Test Report: Hyper-V_Windows 19124

                    
b47018a41c76a7aa401be8ce52e856258110c967:2024-06-24:35020

Tests failed (19/134)

TestAddons/parallel/Registry (71.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 26.9725ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kkmh9" [a8ba7278-c4d6-454b-9ff6-2599925bf8f1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0090362s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4pllp" [c424816d-3d97-47f4-96b4-ee6359f55fbe] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0218191s
addons_test.go:342: (dbg) Run:  kubectl --context addons-517800 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-517800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-517800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.4406206s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 ip: (2.8651259s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0624 03:28:39.759972    6272 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-517800 ip"
2024/06/24 03:28:42 [DEBUG] GET http://172.31.209.187:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable registry --alsologtostderr -v=1: (14.6946579s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-517800 -n addons-517800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-517800 -n addons-517800: (12.2907145s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 logs -n 25: (9.527594s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT |                     |
	|         | -p download-only-455700                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:20 PDT |
	| delete  | -p download-only-455700                                                                     | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:20 PDT |
	| start   | -o=json --download-only                                                                     | download-only-067200 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT |                     |
	|         | -p download-only-067200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:20 PDT |
	| delete  | -p download-only-067200                                                                     | download-only-067200 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:21 PDT |
	| delete  | -p download-only-455700                                                                     | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:21 PDT |
	| delete  | -p download-only-067200                                                                     | download-only-067200 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:21 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | binary-mirror-877500                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:61584                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-877500                                                                     | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:21 PDT |
	| addons  | disable dashboard -p                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-517800 --wait=true                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:28 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
	| ip      | addons-517800 ip                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-517800 ssh cat                                                                       | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT |                     |
	|         | /opt/local-path-provisioner/pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5_default_test-pvc/file1 |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:21:10
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:21:10.051598    8244 out.go:291] Setting OutFile to fd 872 ...
	I0624 03:21:10.051833    8244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:21:10.051833    8244 out.go:304] Setting ErrFile to fd 720...
	I0624 03:21:10.051833    8244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:21:10.077538    8244 out.go:298] Setting JSON to false
	I0624 03:21:10.080583    8244 start.go:129] hostinfo: {"hostname":"minikube1","uptime":14925,"bootTime":1719209544,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:21:10.080583    8244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:21:10.089611    8244 out.go:177] * [addons-517800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:21:10.093428    8244 notify.go:220] Checking for updates...
	I0624 03:21:10.093746    8244 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:21:10.098866    8244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:21:10.101147    8244 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:21:10.103066    8244 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:21:10.104808    8244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:21:10.108094    8244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:21:15.217947    8244 out.go:177] * Using the hyperv driver based on user configuration
	I0624 03:21:15.221899    8244 start.go:297] selected driver: hyperv
	I0624 03:21:15.221899    8244 start.go:901] validating driver "hyperv" against <nil>
	I0624 03:21:15.221899    8244 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:21:15.274042    8244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:21:15.274335    8244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:21:15.274335    8244 cni.go:84] Creating CNI manager for ""
	I0624 03:21:15.274335    8244 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:21:15.275843    8244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:21:15.276006    8244 start.go:340] cluster config:
	{Name:addons-517800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-517800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:21:15.276006    8244 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:21:15.276831    8244 out.go:177] * Starting "addons-517800" primary control-plane node in "addons-517800" cluster
	I0624 03:21:15.282321    8244 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:21:15.283353    8244 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:21:15.283353    8244 cache.go:56] Caching tarball of preloaded images
	I0624 03:21:15.283504    8244 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:21:15.283850    8244 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:21:15.284050    8244 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\config.json ...
	I0624 03:21:15.284050    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\config.json: {Name:mk61c1b058a7f394d396946bd11c84b531b21cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:21:15.285714    8244 start.go:360] acquireMachinesLock for addons-517800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:21:15.286240    8244 start.go:364] duration metric: took 482.3µs to acquireMachinesLock for "addons-517800"
	I0624 03:21:15.286391    8244 start.go:93] Provisioning new machine with config: &{Name:addons-517800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:addons-517800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:21:15.286391    8244 start.go:125] createHost starting for "" (driver="hyperv")
	I0624 03:21:15.286949    8244 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0624 03:21:15.290702    8244 start.go:159] libmachine.API.Create for "addons-517800" (driver="hyperv")
	I0624 03:21:15.290702    8244 client.go:168] LocalClient.Create starting
	I0624 03:21:15.291107    8244 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 03:21:15.426026    8244 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 03:21:15.550068    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 03:21:17.433617    8244 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 03:21:17.433617    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:17.433944    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 03:21:19.026242    8244 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 03:21:19.026242    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:19.034393    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 03:21:20.426053    8244 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 03:21:20.426053    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:20.434517    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 03:21:23.846634    8244 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 03:21:23.846634    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:23.858897    8244 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 03:21:24.306192    8244 main.go:141] libmachine: Creating SSH key...
	I0624 03:21:24.374746    8244 main.go:141] libmachine: Creating VM...
	I0624 03:21:24.375246    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 03:21:27.023176    8244 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 03:21:27.023176    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:27.032949    8244 main.go:141] libmachine: Using switch "Default Switch"
	I0624 03:21:27.033047    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 03:21:28.627754    8244 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 03:21:28.627754    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:28.627754    8244 main.go:141] libmachine: Creating VHD
	I0624 03:21:28.634954    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 03:21:32.283668    8244 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 05FCA1C0-2AE4-4860-A9D5-56C5D538425C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 03:21:32.283668    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:32.283668    8244 main.go:141] libmachine: Writing magic tar header
	I0624 03:21:32.283668    8244 main.go:141] libmachine: Writing SSH key tar header
	I0624 03:21:32.292877    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 03:21:35.315304    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:35.315304    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:35.325273    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\disk.vhd' -SizeBytes 20000MB
	I0624 03:21:37.677242    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:37.677242    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:37.686632    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-517800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0624 03:21:41.124668    8244 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-517800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 03:21:41.124668    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:41.124852    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-517800 -DynamicMemoryEnabled $false
	I0624 03:21:43.244220    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:43.244585    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:43.244664    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-517800 -Count 2
	I0624 03:21:45.234073    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:45.234073    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:45.234073    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-517800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\boot2docker.iso'
	I0624 03:21:47.697331    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:47.697331    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:47.697331    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-517800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\disk.vhd'
	I0624 03:21:50.332847    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:50.332847    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:50.332847    8244 main.go:141] libmachine: Starting VM...
	I0624 03:21:50.341247    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-517800
	I0624 03:21:53.654216    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:53.654280    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:53.654280    8244 main.go:141] libmachine: Waiting for host to start...
	I0624 03:21:53.654280    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:21:56.031681    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:21:56.037151    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:56.037151    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:21:58.624903    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:21:58.624903    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:21:59.627931    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:01.816957    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:01.816957    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:01.816957    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:04.283604    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:22:04.283604    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:05.288484    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:07.365227    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:07.365227    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:07.365227    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:09.793039    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:22:09.793039    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:10.810134    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:12.929678    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:12.929678    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:12.940044    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:15.361653    8244 main.go:141] libmachine: [stdout =====>] : 
	I0624 03:22:15.361653    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:16.366656    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:18.483322    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:18.483322    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:18.484544    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:20.892910    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:20.892910    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:20.903017    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:22.892108    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:22.892350    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:22.892350    8244 machine.go:94] provisionDockerMachine start ...
	I0624 03:22:22.892532    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:24.837620    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:24.837620    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:24.846837    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:27.259060    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:27.259060    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:27.275290    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:22:27.284930    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:22:27.284930    8244 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:22:27.412526    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 03:22:27.412616    8244 buildroot.go:166] provisioning hostname "addons-517800"
	I0624 03:22:27.412756    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:29.471585    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:29.471585    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:29.482247    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:31.891847    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:31.891847    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:31.908061    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:22:31.908589    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:22:31.908775    8244 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-517800 && echo "addons-517800" | sudo tee /etc/hostname
	I0624 03:22:32.061740    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-517800
	
	I0624 03:22:32.061958    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:34.025204    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:34.025204    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:34.025204    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:36.387722    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:36.387722    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:36.393771    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:22:36.394500    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:22:36.394500    8244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-517800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-517800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-517800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:22:36.533045    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:22:36.533175    8244 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:22:36.533288    8244 buildroot.go:174] setting up certificates
	I0624 03:22:36.533333    8244 provision.go:84] configureAuth start
	I0624 03:22:36.533430    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:38.612367    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:38.612367    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:38.623440    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:41.021255    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:41.031664    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:41.032012    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:43.095968    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:43.095968    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:43.107424    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:45.486083    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:45.486083    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:45.486211    8244 provision.go:143] copyHostCerts
	I0624 03:22:45.487163    8244 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:22:45.488768    8244 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:22:45.489706    8244 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:22:45.491155    8244 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-517800 san=[127.0.0.1 172.31.209.187 addons-517800 localhost minikube]
	I0624 03:22:45.647087    8244 provision.go:177] copyRemoteCerts
	I0624 03:22:45.656690    8244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:22:45.656690    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:47.645205    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:47.655743    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:47.655871    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:50.025478    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:50.036291    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:50.036801    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:22:50.142151    8244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4854431s)
	I0624 03:22:50.142873    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:22:50.184349    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0624 03:22:50.225592    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:22:50.262266    8244 provision.go:87] duration metric: took 13.7288795s to configureAuth
	I0624 03:22:50.262266    8244 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:22:50.269476    8244 config.go:182] Loaded profile config "addons-517800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:22:50.269595    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:52.303768    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:52.303768    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:52.303768    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:54.749223    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:54.749223    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:54.757784    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:22:54.758845    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:22:54.758845    8244 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:22:54.891367    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:22:54.891489    8244 buildroot.go:70] root file system type: tmpfs
	I0624 03:22:54.891827    8244 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:22:54.891915    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:22:56.900811    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:22:56.900811    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:56.911886    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:22:59.319649    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:22:59.330715    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:22:59.336772    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:22:59.336921    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:22:59.336921    8244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:22:59.494395    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:22:59.494943    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:01.536640    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:01.548361    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:01.548361    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:03.896313    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:03.896313    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:03.914037    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:23:03.914565    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:23:03.914757    8244 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:23:05.932385    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 03:23:05.932429    8244 machine.go:97] duration metric: took 43.0398337s to provisionDockerMachine
	I0624 03:23:05.932479    8244 client.go:171] duration metric: took 1m50.6413446s to LocalClient.Create
	I0624 03:23:05.932534    8244 start.go:167] duration metric: took 1m50.6413446s to libmachine.API.Create "addons-517800"
	I0624 03:23:05.932584    8244 start.go:293] postStartSetup for "addons-517800" (driver="hyperv")
	I0624 03:23:05.932584    8244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:23:05.944973    8244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:23:05.944973    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:07.928784    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:07.928860    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:07.929112    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:10.318178    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:10.318178    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:10.329820    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:23:10.431053    8244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4859655s)
	I0624 03:23:10.444197    8244 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:23:10.451129    8244 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:23:10.451259    8244 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:23:10.451940    8244 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:23:10.452251    8244 start.go:296] duration metric: took 4.5196487s for postStartSetup
	I0624 03:23:10.455006    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:12.436201    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:12.436201    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:12.436201    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:14.864689    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:14.864689    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:14.865057    8244 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\config.json ...
	I0624 03:23:14.867784    8244 start.go:128] duration metric: took 1m59.5809256s to createHost
	I0624 03:23:14.867862    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:16.878228    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:16.878228    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:16.890393    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:19.267083    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:19.267083    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:19.284722    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:23:19.285538    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:23:19.285538    8244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 03:23:19.413706    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719224599.420805840
	
	I0624 03:23:19.413706    8244 fix.go:216] guest clock: 1719224599.420805840
	I0624 03:23:19.413815    8244 fix.go:229] Guest: 2024-06-24 03:23:19.42080584 -0700 PDT Remote: 2024-06-24 03:23:14.8678627 -0700 PDT m=+124.900683901 (delta=4.55294314s)
	I0624 03:23:19.413815    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:21.408344    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:21.419878    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:21.420119    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:23.815346    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:23.815346    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:23.821174    8244 main.go:141] libmachine: Using SSH client type: native
	I0624 03:23:23.821857    8244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.209.187 22 <nil> <nil>}
	I0624 03:23:23.821857    8244 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719224599
	I0624 03:23:23.958730    8244 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:23:19 UTC 2024
	
	I0624 03:23:23.958730    8244 fix.go:236] clock set: Mon Jun 24 10:23:19 UTC 2024
	 (err=<nil>)
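The two SSH round-trips above are a crude clock sync: read the guest clock with `date +%s.%N`, compare it with the host-side reference, and if the drift is large enough set the guest clock with `sudo date -s @<epoch>` (here the guest was about 4.55 s ahead of the host timestamp recorded at createHost). A small sketch of the comparison step, assuming the guest output has already been captured as a string; the 2 s threshold is an arbitrary choice for the sketch, not minikube's:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1719224599.420805840" (date +%s.%N) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1719224599.420805840")
		if err != nil {
			panic(err)
		}
		host := time.Now()
		delta := guest.Sub(host)
		fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)
		if delta > 2*time.Second || delta < -2*time.Second {
			// The log fixes drift with a single remote command of this shape.
			fmt.Printf("would run: sudo date -s @%d\n", host.Unix())
		}
	}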
	I0624 03:23:23.958730    8244 start.go:83] releasing machines lock for "addons-517800", held for 2m8.6719863s
	I0624 03:23:23.958945    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:25.931854    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:25.931854    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:25.931854    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:28.332472    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:28.332472    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:28.336989    8244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:23:28.336989    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:28.349458    8244 ssh_runner.go:195] Run: cat /version.json
	I0624 03:23:28.349458    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:23:30.497467    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:30.497467    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:30.497467    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:23:30.497663    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:30.497597    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:30.497774    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:23:33.072395    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:33.072395    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:33.083482    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:23:33.105617    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:23:33.105617    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:23:33.106247    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:23:33.185516    8244 ssh_runner.go:235] Completed: cat /version.json: (4.8360392s)
	I0624 03:23:33.198539    8244 ssh_runner.go:195] Run: systemctl --version
	I0624 03:23:33.255124    8244 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9181149s)
	I0624 03:23:33.265492    8244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 03:23:33.278194    8244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:23:33.289441    8244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:23:33.316228    8244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 03:23:33.316379    8244 start.go:494] detecting cgroup driver to use...
	I0624 03:23:33.317072    8244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:23:33.361501    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:23:33.393282    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:23:33.408588    8244 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:23:33.426650    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:23:33.456076    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:23:33.490616    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:23:33.522738    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:23:33.554031    8244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:23:33.585639    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:23:33.616976    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:23:33.649141    8244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:23:33.681322    8244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:23:33.714128    8244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:23:33.746628    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:33.945842    8244 ssh_runner.go:195] Run: sudo systemctl restart containerd
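The run of `sed` commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force `SystemdCgroup = false` (the cgroupfs driver), migrate old runtime names to io.containerd.runc.v2, point `conf_dir` at /etc/cni/net.d and allow unprivileged ports, then enable IP forwarding and restart containerd. A sketch of the core substitution on an in-memory config, using the same regular-expression shape as the sed call:

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs flips any "SystemdCgroup = ..." assignment to false,
	// preserving the original indentation, like the sed command in the log.
	func forceCgroupfs(config string) string {
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	}

	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		fmt.Print(forceCgroupfs(in))
	}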
	I0624 03:23:33.971086    8244 start.go:494] detecting cgroup driver to use...
	I0624 03:23:33.992550    8244 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:23:34.025883    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:23:34.062800    8244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:23:34.111638    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:23:34.148272    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:23:34.180480    8244 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 03:23:34.246002    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:23:34.273289    8244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:23:34.318388    8244 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:23:34.341923    8244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:23:34.366681    8244 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:23:34.414751    8244 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:23:34.611898    8244 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:23:34.799209    8244 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:23:34.799465    8244 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:23:34.842694    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:35.016430    8244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:23:37.476404    8244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4505207s)
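The 130-byte /etc/docker/daemon.json written just before this restart is what actually switches Docker to the cgroupfs driver. Its exact contents are not shown in the log; the file below is a plausible minimal version (an assumption, not what minikube wrote), generated here with a tiny Go program:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Hypothetical daemon.json fields; only the cgroup-driver setting is
		// implied by the "configuring docker to use cgroupfs" log line above.
		cfg := map[string]any{
			"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
			"log-driver": "json-file",
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b))
	}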
	I0624 03:23:37.487919    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 03:23:37.522130    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:23:37.554678    8244 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 03:23:37.747043    8244 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 03:23:37.931319    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:38.112029    8244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 03:23:38.150875    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 03:23:38.194809    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:38.380455    8244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 03:23:38.486474    8244 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 03:23:38.501751    8244 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 03:23:38.511500    8244 start.go:562] Will wait 60s for crictl version
	I0624 03:23:38.524923    8244 ssh_runner.go:195] Run: which crictl
	I0624 03:23:38.545039    8244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 03:23:38.596473    8244 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 03:23:38.607290    8244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:23:38.655477    8244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 03:23:38.690361    8244 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 03:23:38.690613    8244 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 03:23:38.695088    8244 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 03:23:38.695088    8244 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 03:23:38.695088    8244 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 03:23:38.695088    8244 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 03:23:38.697014    8244 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 03:23:38.697014    8244 ip.go:210] interface addr: 172.31.208.1/20
	I0624 03:23:38.711404    8244 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 03:23:38.713376    8244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
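The bash one-liner above makes the host.minikube.internal entry idempotent: it copies /etc/hosts minus any existing line for that name, appends the fresh mapping, and swaps the result back in. The same filter-then-append pattern as a local Go sketch (the IP and hostname are the ones from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost returns the hosts content with exactly one entry for name.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			// Drop lines that already end in "<tab><name>", like the grep -v in the log.
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
			fmt.Sprintf("\n%s\t%s\n", ip, name)
	}

	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(upsertHost(string(b), "172.31.208.1", "host.minikube.internal"))
	}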
	I0624 03:23:38.737418    8244 kubeadm.go:877] updating cluster {Name:addons-517800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:addons-517800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.209.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 03:23:38.738059    8244 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:23:38.747437    8244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:23:38.768426    8244 docker.go:685] Got preloaded images: 
	I0624 03:23:38.768426    8244 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0624 03:23:38.781369    8244 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:23:38.814984    8244 ssh_runner.go:195] Run: which lz4
	I0624 03:23:38.838748    8244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 03:23:38.847907    8244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 03:23:38.848098    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0624 03:23:40.725031    8244 docker.go:649] duration metric: took 1.8992836s to copy over tarball
	I0624 03:23:40.736635    8244 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 03:23:45.772352    8244 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.0356972s)
	I0624 03:23:45.772544    8244 ssh_runner.go:146] rm: /preloaded.tar.lz4
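The preloaded image tarball (~360 MB) is copied into the guest and unpacked directly over /var with `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`, then deleted; that is why the image list a few lines below is fully populated without any pulls. Driving the same extraction from Go would look roughly like this, assuming the tar and lz4 binaries shipped in the guest image:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Mirrors the extraction command from the log: preserve xattrs
		// (security.capability) and decompress with lz4 while unpacking
		// into /var so the docker image store lands in /var/lib/docker.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract preload: %v: %s", err, out)
		}
		log.Print("preload extracted")
	}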
	I0624 03:23:45.835938    8244 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 03:23:45.852752    8244 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0624 03:23:45.895868    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:46.080958    8244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:23:51.736198    8244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6551326s)
	I0624 03:23:51.745288    8244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 03:23:51.770549    8244 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 03:23:51.770549    8244 cache_images.go:84] Images are preloaded, skipping loading
	I0624 03:23:51.770549    8244 kubeadm.go:928] updating node { 172.31.209.187 8443 v1.30.2 docker true true} ...
	I0624 03:23:51.770874    8244 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-517800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.209.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-517800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 03:23:51.781525    8244 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 03:23:51.818032    8244 cni.go:84] Creating CNI manager for ""
	I0624 03:23:51.818032    8244 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:23:51.818032    8244 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 03:23:51.818032    8244 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.209.187 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-517800 NodeName:addons-517800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.209.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.209.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 03:23:51.818567    8244 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.209.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-517800"
	  kubeletExtraArgs:
	    node-ip: 172.31.209.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.209.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
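This generated manifest is what gets uploaded as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later fed to `kubeadm init --config`. It is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick sanity check before shipping such a file is to decode the multi-document stream and confirm the kubelet's cgroupDriver matches the runtime; sketched here with the common gopkg.in/yaml.v3 package (an external dependency, assumed available):

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			// Only the KubeletConfiguration document carries cgroupDriver.
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			}
		}
	}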
	
	I0624 03:23:51.830690    8244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 03:23:51.848257    8244 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 03:23:51.860836    8244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 03:23:51.876118    8244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0624 03:23:51.903049    8244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 03:23:51.933275    8244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0624 03:23:51.975482    8244 ssh_runner.go:195] Run: grep 172.31.209.187	control-plane.minikube.internal$ /etc/hosts
	I0624 03:23:51.981121    8244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.209.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 03:23:52.011648    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:23:52.180728    8244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:23:52.204918    8244 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800 for IP: 172.31.209.187
	I0624 03:23:52.204918    8244 certs.go:194] generating shared ca certs ...
	I0624 03:23:52.205073    8244 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:52.205387    8244 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 03:23:52.314724    8244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0624 03:23:52.314724    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:52.324804    8244 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0624 03:23:52.324976    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:52.325170    8244 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 03:23:52.583985    8244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0624 03:23:52.583985    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:52.590560    8244 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0624 03:23:52.590560    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:52.592541    8244 certs.go:256] generating profile certs ...
	I0624 03:23:52.593286    8244 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.key
	I0624 03:23:52.593286    8244 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt with IP's: []
	I0624 03:23:52.998591    8244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt ...
	I0624 03:23:52.998591    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: {Name:mk41e21469a762f858d1fc211efc66c72d3723b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:53.002258    8244 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.key ...
	I0624 03:23:53.002258    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.key: {Name:mkbad30b2350259a7312683227d35d24fbdb5573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:53.003527    8244 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key.f5db5329
	I0624 03:23:53.004850    8244 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt.f5db5329 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.209.187]
	I0624 03:23:53.145358    8244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt.f5db5329 ...
	I0624 03:23:53.145358    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt.f5db5329: {Name:mkaed2b7a28a0cb21575baab9e8722dabd174b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:53.152085    8244 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key.f5db5329 ...
	I0624 03:23:53.152085    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key.f5db5329: {Name:mka375bf0c2a8445d23dfa3a782c61abbb661754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:53.153508    8244 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt.f5db5329 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt
	I0624 03:23:53.155884    8244 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key.f5db5329 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key
	I0624 03:23:53.166872    8244 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.key
	I0624 03:23:53.166872    8244 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.crt with IP's: []
	I0624 03:23:53.258850    8244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.crt ...
	I0624 03:23:53.258850    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.crt: {Name:mk0371b2fcdc0b759a722562856f901261a80690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:23:53.265442    8244 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.key ...
	I0624 03:23:53.265442    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.key: {Name:mkb8182e5331ea39d2e8b673e999a845a24ad30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
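certs.go first creates the shared minikubeCA and proxyClientCA key pairs locally on the Windows host, then signs the per-profile certificates against them; note the apiserver cert's SAN list [10.96.0.1 127.0.0.1 10.0.0.1 172.31.209.187], which covers the in-cluster service IP, loopback, and the VM address. A condensed crypto/x509 sketch of that CA-then-leaf flow (illustrative parameters, not minikube's exact ones):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// 1. Self-signed CA (the role of ca.crt / ca.key).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// 2. Leaf cert signed by the CA, with the IP SANs seen in the log.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("172.31.209.187"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("CA %d bytes, leaf %d bytes", len(caDER), len(leafDER))
	}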
	I0624 03:23:53.269256    8244 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 03:23:53.277402    8244 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 03:23:53.277652    8244 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 03:23:53.277913    8244 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 03:23:53.278174    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 03:23:53.329511    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 03:23:53.378756    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 03:23:53.432358    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 03:23:53.475275    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0624 03:23:53.519413    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0624 03:23:53.562605    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 03:23:53.609959    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 03:23:53.658778    8244 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 03:23:53.700770    8244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 03:23:53.740039    8244 ssh_runner.go:195] Run: openssl version
	I0624 03:23:53.764046    8244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 03:23:53.797931    8244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:23:53.808699    8244 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:23:53.824801    8244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 03:23:53.843965    8244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
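Installing the CA into the guest's trust store is a two-step dance: link ca.crt into /usr/share/ca-certificates and /etc/ssl/certs as minikubeCA.pem, then create /etc/ssl/certs/<subject-hash>.0 pointing at it; b5213941 is simply `openssl x509 -hash -noout` applied to the minikube CA. A sketch of deriving that symlink name, shelling out to openssl the same way the log does:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		// OpenSSL looks certificates up as <subject-hash>.<n>; the log uses .0.
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}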
	I0624 03:23:53.881276    8244 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 03:23:53.887830    8244 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 03:23:53.887910    8244 kubeadm.go:391] StartCluster: {Name:addons-517800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:addons-517800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.209.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:23:53.897560    8244 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 03:23:53.933779    8244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 03:23:53.965096    8244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 03:23:53.994815    8244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 03:23:54.011790    8244 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 03:23:54.011856    8244 kubeadm.go:156] found existing configuration files:
	
	I0624 03:23:54.024522    8244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 03:23:54.040086    8244 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 03:23:54.053169    8244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 03:23:54.084446    8244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 03:23:54.100317    8244 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 03:23:54.113263    8244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 03:23:54.142191    8244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 03:23:54.160493    8244 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 03:23:54.176331    8244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 03:23:54.205543    8244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 03:23:54.214298    8244 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 03:23:54.236843    8244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 03:23:54.256236    8244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 03:23:54.313944    8244 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0624 03:23:54.327732    8244 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 03:23:54.488183    8244 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 03:23:54.488563    8244 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 03:23:54.488805    8244 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 03:23:54.759426    8244 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 03:23:54.770944    8244 out.go:204]   - Generating certificates and keys ...
	I0624 03:23:54.771234    8244 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 03:23:54.771404    8244 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 03:23:54.902614    8244 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0624 03:23:55.031550    8244 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0624 03:23:55.134165    8244 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0624 03:23:55.250659    8244 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0624 03:23:55.445130    8244 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0624 03:23:55.449256    8244 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-517800 localhost] and IPs [172.31.209.187 127.0.0.1 ::1]
	I0624 03:23:55.765600    8244 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0624 03:23:55.766209    8244 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-517800 localhost] and IPs [172.31.209.187 127.0.0.1 ::1]
	I0624 03:23:55.912810    8244 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0624 03:23:56.141303    8244 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0624 03:23:56.353788    8244 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0624 03:23:56.360070    8244 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 03:23:56.435787    8244 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 03:23:56.631330    8244 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 03:23:56.907697    8244 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 03:23:57.167087    8244 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 03:23:57.238663    8244 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 03:23:57.243161    8244 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 03:23:57.249664    8244 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 03:23:57.254392    8244 out.go:204]   - Booting up control plane ...
	I0624 03:23:57.254606    8244 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 03:23:57.254928    8244 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 03:23:57.255109    8244 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 03:23:57.280474    8244 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 03:23:57.281468    8244 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 03:23:57.281580    8244 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 03:23:57.467325    8244 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0624 03:23:57.467430    8244 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 03:23:58.467943    8244 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001397053s
	I0624 03:23:58.468165    8244 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0624 03:24:04.969634    8244 kubeadm.go:309] [api-check] The API server is healthy after 6.501961926s
	I0624 03:24:04.990651    8244 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 03:24:05.016237    8244 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 03:24:05.066740    8244 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 03:24:05.067271    8244 kubeadm.go:309] [mark-control-plane] Marking the node addons-517800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 03:24:05.095584    8244 kubeadm.go:309] [bootstrap-token] Using token: kj2o91.95eusrwk588luf0u
	I0624 03:24:05.099854    8244 out.go:204]   - Configuring RBAC rules ...
	I0624 03:24:05.100518    8244 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 03:24:05.112548    8244 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 03:24:05.130552    8244 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 03:24:05.137333    8244 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 03:24:05.143326    8244 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 03:24:05.148735    8244 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 03:24:05.383108    8244 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 03:24:05.867559    8244 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 03:24:06.383699    8244 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 03:24:06.384374    8244 kubeadm.go:309] 
	I0624 03:24:06.386719    8244 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 03:24:06.386808    8244 kubeadm.go:309] 
	I0624 03:24:06.387010    8244 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 03:24:06.387086    8244 kubeadm.go:309] 
	I0624 03:24:06.387164    8244 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 03:24:06.387403    8244 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 03:24:06.387403    8244 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 03:24:06.387403    8244 kubeadm.go:309] 
	I0624 03:24:06.387403    8244 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 03:24:06.387403    8244 kubeadm.go:309] 
	I0624 03:24:06.387403    8244 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 03:24:06.387403    8244 kubeadm.go:309] 
	I0624 03:24:06.388003    8244 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 03:24:06.388260    8244 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 03:24:06.388478    8244 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 03:24:06.388478    8244 kubeadm.go:309] 
	I0624 03:24:06.388478    8244 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 03:24:06.388478    8244 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 03:24:06.388478    8244 kubeadm.go:309] 
	I0624 03:24:06.389090    8244 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kj2o91.95eusrwk588luf0u \
	I0624 03:24:06.389577    8244 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 \
	I0624 03:24:06.389703    8244 kubeadm.go:309] 	--control-plane 
	I0624 03:24:06.389703    8244 kubeadm.go:309] 
	I0624 03:24:06.389930    8244 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 03:24:06.390009    8244 kubeadm.go:309] 
	I0624 03:24:06.390203    8244 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kj2o91.95eusrwk588luf0u \
	I0624 03:24:06.390599    8244 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
	I0624 03:24:06.390914    8244 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 03:24:06.390914    8244 cni.go:84] Creating CNI manager for ""
	I0624 03:24:06.390914    8244 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:24:06.395867    8244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0624 03:24:06.412837    8244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0624 03:24:06.430694    8244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
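The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI config matching the 10.244.0.0/16 pod CIDR chosen above. Its exact contents are not in the log; the conflist embedded in the sketch below is a typical bridge+portmap config of that shape and should be read as an assumption, not as the file minikube shipped:

	package main

	import (
		"log"
		"os"
	)

	// conflist is a representative bridge CNI config, not a copy of minikube's.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}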
	I0624 03:24:06.466438    8244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 03:24:06.479088    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-517800 minikube.k8s.io/updated_at=2024_06_24T03_24_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=addons-517800 minikube.k8s.io/primary=true
	I0624 03:24:06.479088    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:06.492046    8244 ops.go:34] apiserver oom_adj: -16
	I0624 03:24:06.671934    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:07.178646    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:07.671773    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:08.182776    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:08.672145    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:09.177182    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:09.683124    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:10.181714    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:10.674074    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:11.184054    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:11.673332    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:12.178039    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:12.684901    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:13.182413    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:13.681189    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:14.179769    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:14.669597    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:15.184241    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:15.675233    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:16.181251    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:16.678767    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:17.177459    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:17.685779    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:18.179005    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:18.671656    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:19.173619    8244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 03:24:19.274831    8244 kubeadm.go:1107] duration metric: took 12.8082269s to wait for elevateKubeSystemPrivileges
	W0624 03:24:19.274951    8244 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 03:24:19.274951    8244 kubeadm.go:393] duration metric: took 25.3869398s to StartCluster
	I0624 03:24:19.274951    8244 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:24:19.274951    8244 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:24:19.275665    8244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:24:19.277819    8244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0624 03:24:19.277943    8244 start.go:234] Will wait 6m0s for node &{Name: IP:172.31.209.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 03:24:19.278215    8244 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0624 03:24:19.278509    8244 addons.go:69] Setting yakd=true in profile "addons-517800"
	I0624 03:24:19.278590    8244 addons.go:234] Setting addon yakd=true in "addons-517800"
	I0624 03:24:19.278783    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.278783    8244 addons.go:69] Setting default-storageclass=true in profile "addons-517800"
	I0624 03:24:19.278859    8244 addons.go:69] Setting storage-provisioner=true in profile "addons-517800"
	I0624 03:24:19.278933    8244 addons.go:69] Setting ingress-dns=true in profile "addons-517800"
	I0624 03:24:19.278933    8244 addons.go:234] Setting addon storage-provisioner=true in "addons-517800"
	I0624 03:24:19.278999    8244 addons.go:234] Setting addon ingress-dns=true in "addons-517800"
	I0624 03:24:19.278999    8244 config.go:182] Loaded profile config "addons-517800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:19.278999    8244 addons.go:69] Setting registry=true in profile "addons-517800"
	I0624 03:24:19.279073    8244 addons.go:234] Setting addon registry=true in "addons-517800"
	I0624 03:24:19.279213    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.278999    8244 addons.go:69] Setting gcp-auth=true in profile "addons-517800"
	I0624 03:24:19.278999    8244 addons.go:69] Setting volcano=true in profile "addons-517800"
	I0624 03:24:19.279422    8244 addons.go:234] Setting addon volcano=true in "addons-517800"
	I0624 03:24:19.278933    8244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-517800"
	I0624 03:24:19.279589    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.279422    8244 mustload.go:65] Loading cluster: addons-517800
	I0624 03:24:19.278933    8244 addons.go:69] Setting helm-tiller=true in profile "addons-517800"
	I0624 03:24:19.279739    8244 addons.go:234] Setting addon helm-tiller=true in "addons-517800"
	I0624 03:24:19.279739    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.279073    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.279739    8244 config.go:182] Loaded profile config "addons-517800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:24:19.278783    8244 addons.go:69] Setting cloud-spanner=true in profile "addons-517800"
	I0624 03:24:19.280803    8244 addons.go:234] Setting addon cloud-spanner=true in "addons-517800"
	I0624 03:24:19.280944    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.278783    8244 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-517800"
	I0624 03:24:19.281017    8244 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-517800"
	I0624 03:24:19.278933    8244 addons.go:69] Setting ingress=true in profile "addons-517800"
	I0624 03:24:19.278999    8244 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-517800"
	I0624 03:24:19.281299    8244 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-517800"
	I0624 03:24:19.281299    8244 addons.go:234] Setting addon ingress=true in "addons-517800"
	I0624 03:24:19.278999    8244 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-517800"
	I0624 03:24:19.281453    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.281532    8244 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-517800"
	I0624 03:24:19.278999    8244 addons.go:69] Setting metrics-server=true in profile "addons-517800"
	I0624 03:24:19.281673    8244 addons.go:234] Setting addon metrics-server=true in "addons-517800"
	I0624 03:24:19.281746    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.281809    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.281809    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.278999    8244 addons.go:69] Setting volumesnapshots=true in profile "addons-517800"
	I0624 03:24:19.282058    8244 addons.go:234] Setting addon volumesnapshots=true in "addons-517800"
	I0624 03:24:19.282120    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.278783    8244 addons.go:69] Setting inspektor-gadget=true in profile "addons-517800"
	I0624 03:24:19.282396    8244 addons.go:234] Setting addon inspektor-gadget=true in "addons-517800"
	I0624 03:24:19.282456    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.282521    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.279275    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.282658    8244 out.go:177] * Verifying Kubernetes components...
	I0624 03:24:19.281235    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:19.285110    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.286452    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.287328    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.287871    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.289655    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.291577    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.306516    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.306651    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.307196    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.309027    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.310880    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.310880    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.310880    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.310880    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:19.318035    8244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:24:20.827329    8244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.549414s)
	I0624 03:24:20.827329    8244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0624 03:24:20.827329    8244 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.5092873s)
	I0624 03:24:20.843640    8244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 03:24:23.906199    8244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.0788578s)
	I0624 03:24:23.906199    8244 start.go:946] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
	I0624 03:24:23.906756    8244 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.0631032s)
	I0624 03:24:23.984947    8244 node_ready.go:35] waiting up to 6m0s for node "addons-517800" to be "Ready" ...
	I0624 03:24:24.451758    8244 node_ready.go:49] node "addons-517800" has status "Ready":"True"
	I0624 03:24:24.451758    8244 node_ready.go:38] duration metric: took 466.8095ms for node "addons-517800" to be "Ready" ...
	I0624 03:24:24.451758    8244 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 03:24:24.678074    8244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace to be "Ready" ...
	W0624 03:24:24.791560    8244 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-517800" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0624 03:24:24.791697    8244 start.go:159] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0624 03:24:25.895627    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:25.895627    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:25.901345    8244 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0624 03:24:25.905944    8244 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0624 03:24:25.905944    8244 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0624 03:24:25.905944    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:25.927662    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:25.927727    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:25.933122    8244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0624 03:24:25.953102    8244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0624 03:24:25.966505    8244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0624 03:24:25.976507    8244 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0624 03:24:25.976507    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0624 03:24:25.976507    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.020945    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.020945    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.027221    8244 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0624 03:24:26.033289    8244 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0624 03:24:26.033289    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0624 03:24:26.033289    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.163853    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.163853    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.179117    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0624 03:24:26.185347    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0624 03:24:26.202535    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0624 03:24:26.211774    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0624 03:24:26.215173    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0624 03:24:26.224324    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0624 03:24:26.230942    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0624 03:24:26.230942    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0624 03:24:26.241204    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0624 03:24:26.241204    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0624 03:24:26.241204    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.310234    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.310234    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.344180    8244 addons.go:234] Setting addon default-storageclass=true in "addons-517800"
	I0624 03:24:26.344180    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:26.344180    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.359876    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.359876    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.359876    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.359876    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.364333    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.364333    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.367132    8244 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0624 03:24:26.369861    8244 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0624 03:24:26.370499    8244 out.go:177]   - Using image docker.io/registry:2.8.3
	I0624 03:24:26.371520    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.372974    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.374147    8244 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0624 03:24:26.377004    8244 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0624 03:24:26.377004    8244 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0624 03:24:26.377004    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.379249    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.379992    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.380530    8244 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0624 03:24:26.383404    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.381280    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.385361    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.381921    8244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 03:24:26.387130    8244 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0624 03:24:26.388600    8244 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0624 03:24:26.397594    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.397594    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.402717    8244 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0624 03:24:26.411105    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0624 03:24:26.407930    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0624 03:24:26.411105    8244 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0624 03:24:26.411105    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.407930    8244 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0624 03:24:26.411105    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.412924    8244 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:24:26.421459    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 03:24:26.421679    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.422657    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.422657    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.422716    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:26.415565    8244 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0624 03:24:26.427570    8244 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0624 03:24:26.444921    8244 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0624 03:24:26.446343    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0624 03:24:26.446446    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.454726    8244 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0624 03:24:26.488911    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.488911    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.492029    8244 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0624 03:24:26.492029    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0624 03:24:26.492029    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.516240    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:26.516240    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:26.517435    8244 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0624 03:24:26.517435    8244 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-517800"
	I0624 03:24:26.517435    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:26.522984    8244 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0624 03:24:26.522984    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0624 03:24:26.522984    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:26.535657    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:27.015115    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:27.052903    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:27.052903    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:27.068656    8244 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0624 03:24:27.070206    8244 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0624 03:24:27.070206    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0624 03:24:27.070206    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:29.027755    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:31.055353    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:32.501977    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.501977    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.501977    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.510625    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.510625    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.510625    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.513705    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.513785    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.514011    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.657777    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.702639    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.702639    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.702639    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.861812    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.861812    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.861812    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.916940    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.916940    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.934667    8244 out.go:177]   - Using image docker.io/busybox:stable
	I0624 03:24:32.978820    8244 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0624 03:24:32.988514    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.988514    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.988514    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:32.992423    8244 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0624 03:24:32.992423    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0624 03:24:32.992423    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:32.992423    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:32.992423    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:32.992423    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:33.073956    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:33.073956    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:33.073956    8244 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 03:24:33.073956    8244 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 03:24:33.073956    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:33.160920    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:33.164853    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:33.164853    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:33.164853    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:33.282827    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:33.282827    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:33.282827    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:33.325062    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:33.325127    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:33.325127    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:33.764228    8244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0624 03:24:33.764228    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:34.052762    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:34.052762    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:34.052762    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:37.016536    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:39.204169    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:39.463862    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:39.463862    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:39.463862    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:39.888797    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:39.888797    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:39.888797    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:39.924457    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:39.924529    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:39.924815    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.004083    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.004083    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.004463    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.085127    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.085127    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.085585    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.148547    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.148547    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.148547    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.199437    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0624 03:24:40.199496    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0624 03:24:40.208050    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.208050    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.208050    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.254314    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.254370    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.254370    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.275206    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0624 03:24:40.275206    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0624 03:24:40.314936    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.315031    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.315287    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.390760    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.390760    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.391587    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.419457    8244 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0624 03:24:40.419556    8244 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0624 03:24:40.442986    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.442986    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.443453    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.487521    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0624 03:24:40.496103    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.496308    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.496607    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.497473    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0624 03:24:40.497473    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0624 03:24:40.561307    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.561307    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.561604    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.587291    8244 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0624 03:24:40.587291    8244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0624 03:24:40.646452    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:40.646744    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.647054    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:40.653408    8244 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0624 03:24:40.653500    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0624 03:24:40.683960    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:40.683960    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:40.683960    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:40.732277    8244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0624 03:24:40.732393    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0624 03:24:40.759681    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 03:24:40.789978    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0624 03:24:40.790105    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0624 03:24:40.856933    8244 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0624 03:24:40.857088    8244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0624 03:24:40.911116    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0624 03:24:40.920876    8244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0624 03:24:40.920987    8244 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0624 03:24:40.951152    8244 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0624 03:24:40.951152    8244 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0624 03:24:40.983976    8244 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0624 03:24:40.983976    8244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0624 03:24:41.095189    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0624 03:24:41.162301    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0624 03:24:41.174264    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:41.174321    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:41.174490    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:41.187319    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0624 03:24:41.187628    8244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0624 03:24:41.187628    8244 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0624 03:24:41.211114    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0624 03:24:41.211235    8244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0624 03:24:41.252222    8244 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0624 03:24:41.252222    8244 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0624 03:24:41.332195    8244 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0624 03:24:41.332246    8244 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0624 03:24:41.343860    8244 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0624 03:24:41.343911    8244 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0624 03:24:41.363941    8244 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0624 03:24:41.363941    8244 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0624 03:24:41.488138    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0624 03:24:41.524385    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0624 03:24:41.524447    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0624 03:24:41.550478    8244 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0624 03:24:41.550566    8244 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0624 03:24:41.550636    8244 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0624 03:24:41.550715    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0624 03:24:41.610493    8244 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0624 03:24:41.610567    8244 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0624 03:24:41.642400    8244 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0624 03:24:41.642474    8244 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0624 03:24:41.683947    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0624 03:24:41.684040    8244 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0624 03:24:41.687872    8244 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0624 03:24:41.687872    8244 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0624 03:24:41.693894    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:41.735598    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0624 03:24:41.810466    8244 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0624 03:24:41.810531    8244 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0624 03:24:41.889809    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0624 03:24:41.926986    8244 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0624 03:24:41.926986    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0624 03:24:41.928328    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0624 03:24:41.928328    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0624 03:24:42.010534    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0624 03:24:42.171949    8244 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0624 03:24:42.171949    8244 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0624 03:24:42.187711    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0624 03:24:42.251106    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0624 03:24:42.251168    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0624 03:24:42.553568    8244 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0624 03:24:42.553636    8244 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0624 03:24:42.621990    8244 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0624 03:24:42.622120    8244 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0624 03:24:42.808574    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:42.808745    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:42.809093    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:42.925739    8244 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0624 03:24:42.925823    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0624 03:24:42.977757    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:42.977801    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:42.978025    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:43.155491    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0624 03:24:43.201870    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0624 03:24:43.620323    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:43.620323    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:43.620323    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:43.703119    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:43.882143    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 03:24:44.020377    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0624 03:24:44.947429    8244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0624 03:24:45.227768    8244 addons.go:234] Setting addon gcp-auth=true in "addons-517800"
	I0624 03:24:45.227889    8244 host.go:66] Checking if "addons-517800" exists ...
	I0624 03:24:45.229232    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:45.711233    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:47.561944    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:47.561944    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:47.574640    8244 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0624 03:24:47.574640    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-517800 ).state
	I0624 03:24:48.243569    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:50.086965    8244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:24:50.086965    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:50.087138    8244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-517800 ).networkadapters[0]).ipaddresses[0]
	I0624 03:24:50.705208    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:52.895828    8244 main.go:141] libmachine: [stdout =====>] : 172.31.209.187
	
	I0624 03:24:52.895828    8244 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:24:52.895828    8244 sshutil.go:53] new ssh client: &{IP:172.31.209.187 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-517800\id_rsa Username:docker}
	I0624 03:24:53.225026    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:53.770955    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.2833816s)
	I0624 03:24:53.771023    8244 addons.go:475] Verifying addon ingress=true in "addons-517800"
	I0624 03:24:53.771023    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.0112226s)
	I0624 03:24:53.771111    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.8598562s)
	I0624 03:24:53.771111    8244 addons.go:475] Verifying addon registry=true in "addons-517800"
	I0624 03:24:53.776380    8244 out.go:177] * Verifying ingress addon...
	I0624 03:24:53.778630    8244 out.go:177] * Verifying registry addon...
	I0624 03:24:53.782354    8244 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0624 03:24:53.783695    8244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0624 03:24:53.785452    8244 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0624 03:24:53.785452    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:53.791311    8244 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0624 03:24:53.791311    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:54.297526    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:54.305774    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:54.796137    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:54.796137    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:55.351685    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:55.351685    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:55.352284    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:55.824813    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:55.825699    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:56.312186    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:56.344766    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:56.892293    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:56.893053    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:57.327039    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:57.327768    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:57.665699    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (16.5033316s)
	I0624 03:24:57.665699    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (16.5703644s)
	I0624 03:24:57.665874    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (16.4783142s)
	I0624 03:24:57.665948    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (16.1776819s)
	I0624 03:24:57.666017    8244 addons.go:475] Verifying addon metrics-server=true in "addons-517800"
	I0624 03:24:57.666017    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (15.9298191s)
	I0624 03:24:57.666276    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (15.7764038s)
	I0624 03:24:57.666276    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.6556789s)
	W0624 03:24:57.666276    8244 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0624 03:24:57.666276    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (15.4785032s)
	I0624 03:24:57.666276    8244 retry.go:31] will retry after 162.152681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0624 03:24:57.669010    8244 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-517800 service yakd-dashboard -n yakd-dashboard
	
	I0624 03:24:57.859475    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0624 03:24:57.886405    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:24:57.937298    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:57.988632    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:58.128344    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (14.9264144s)
	I0624 03:24:58.128344    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.9727932s)
	I0624 03:24:58.128582    8244 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-517800"
	I0624 03:24:58.128431    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.2461442s)
	I0624 03:24:58.128486    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.1080533s)
	I0624 03:24:58.128527    8244 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.5538441s)
	I0624 03:24:58.131448    8244 out.go:177] * Verifying csi-hostpath-driver addon...
	I0624 03:24:58.135669    8244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0624 03:24:58.141020    8244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0624 03:24:58.141020    8244 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0624 03:24:58.143862    8244 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0624 03:24:58.143862    8244 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0624 03:24:58.195184    8244 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0624 03:24:58.195184    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0624 03:24:58.247165    8244 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0624 03:24:58.304702    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:58.308695    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:58.385572    8244 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0624 03:24:58.385646    8244 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0624 03:24:58.489441    8244 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0624 03:24:58.489508    8244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0624 03:24:58.586890    8244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0624 03:24:58.668019    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:24:58.799229    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:58.799464    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:59.158209    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:24:59.309360    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:24:59.313000    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:59.650919    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:24:59.794163    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:24:59.794824    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:00.159652    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:00.194087    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:00.301404    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.4418795s)
	I0624 03:25:00.305427    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:00.315483    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:00.470963    8244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.8840116s)
	I0624 03:25:00.480849    8244 addons.go:475] Verifying addon gcp-auth=true in "addons-517800"
	I0624 03:25:00.486960    8244 out.go:177] * Verifying gcp-auth addon...
	I0624 03:25:00.492889    8244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0624 03:25:00.505633    8244 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0624 03:25:00.665510    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:00.794466    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:00.794466    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:01.150504    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:01.305910    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:01.307221    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:01.668010    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:01.794901    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:01.794901    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:02.167531    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:02.309533    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:02.309533    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:02.650429    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:02.688014    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:02.806025    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:02.806502    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:03.159971    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:03.300096    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:03.300481    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:03.666462    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:03.791731    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:03.796372    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:04.149633    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:04.308857    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:04.309563    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:04.657979    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:04.688852    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:04.807056    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:04.810991    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:05.161020    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:05.193680    8244 pod_ready.go:92] pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:05.193680    8244 pod_ready.go:81] duration metric: took 40.515444s for pod "coredns-7db6d8ff4d-67bql" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:05.193680    8244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:05.308013    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:05.309841    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:05.665382    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:05.800835    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:05.800835    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:06.157097    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:06.291932    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:06.294257    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:06.665158    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:06.803903    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:06.804251    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:07.157630    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:07.218337    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:07.306503    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:07.316239    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:07.759603    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:08.253912    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:08.259417    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:08.259659    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:08.295584    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:08.297970    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:08.652598    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:08.800698    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:08.801406    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:09.159936    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:09.222269    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:09.299756    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:09.300120    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:09.663906    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:09.805538    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:09.808284    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:10.164409    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:10.569308    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:11.747540    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:11.755986    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:11.759565    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:11.806637    8244 pod_ready.go:102] pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace has status "Ready":"False"
	I0624 03:25:11.810262    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:11.811255    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:11.821457    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:11.826984    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:11.832139    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:12.158587    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:12.300481    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:12.300661    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:12.652230    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:12.815495    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:12.820635    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:13.161599    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:13.294230    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:13.300501    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:13.668873    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:13.795883    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:13.796754    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:14.165045    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:14.246235    8244 pod_ready.go:92] pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.246311    8244 pod_ready.go:81] duration metric: took 9.052595s for pod "coredns-7db6d8ff4d-q4s6h" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.246363    8244 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.252157    8244 pod_ready.go:92] pod "etcd-addons-517800" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.252157    8244 pod_ready.go:81] duration metric: took 5.7941ms for pod "etcd-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.252157    8244 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.277914    8244 pod_ready.go:92] pod "kube-apiserver-addons-517800" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.277914    8244 pod_ready.go:81] duration metric: took 25.7568ms for pod "kube-apiserver-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.277978    8244 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.291786    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:14.293381    8244 pod_ready.go:92] pod "kube-controller-manager-addons-517800" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.293381    8244 pod_ready.go:81] duration metric: took 15.4025ms for pod "kube-controller-manager-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.293486    8244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njdhk" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.303753    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:14.305886    8244 pod_ready.go:92] pod "kube-proxy-njdhk" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.305886    8244 pod_ready.go:81] duration metric: took 12.3998ms for pod "kube-proxy-njdhk" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.305886    8244 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.749761    8244 pod_ready.go:92] pod "kube-scheduler-addons-517800" in "kube-system" namespace has status "Ready":"True"
	I0624 03:25:14.749844    8244 pod_ready.go:81] duration metric: took 443.9564ms for pod "kube-scheduler-addons-517800" in "kube-system" namespace to be "Ready" ...
	I0624 03:25:14.749879    8244 pod_ready.go:38] duration metric: took 50.2978849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 03:25:14.750001    8244 api_server.go:52] waiting for apiserver process to appear ...
	I0624 03:25:14.761021    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:14.763876    8244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 03:25:14.795486    8244 api_server.go:72] duration metric: took 55.5171581s to wait for apiserver process to appear ...
	I0624 03:25:14.795577    8244 api_server.go:88] waiting for apiserver healthz status ...
	I0624 03:25:14.795612    8244 api_server.go:253] Checking apiserver healthz at https://172.31.209.187:8443/healthz ...
	I0624 03:25:14.804261    8244 api_server.go:279] https://172.31.209.187:8443/healthz returned 200:
	ok
	I0624 03:25:14.809320    8244 api_server.go:141] control plane version: v1.30.2
	I0624 03:25:14.809320    8244 api_server.go:131] duration metric: took 13.743ms to wait for apiserver health ...
	I0624 03:25:14.809400    8244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 03:25:14.810103    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:14.810467    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:14.828958    8244 system_pods.go:59] 19 kube-system pods found
	I0624 03:25:14.828958    8244 system_pods.go:61] "coredns-7db6d8ff4d-67bql" [3ea28df4-1203-422c-8479-7f1e5c6d3adc] Running
	I0624 03:25:14.828958    8244 system_pods.go:61] "coredns-7db6d8ff4d-q4s6h" [e2ec9f01-e85c-4466-9b12-20393b831457] Running
	I0624 03:25:14.828958    8244 system_pods.go:61] "csi-hostpath-attacher-0" [1530eea4-cc9f-4764-8e39-5ffd7ea801df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0624 03:25:14.828958    8244 system_pods.go:61] "csi-hostpath-resizer-0" [ed9c0dae-31f4-481a-bb68-4df6865d5257] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0624 03:25:14.828958    8244 system_pods.go:61] "csi-hostpathplugin-qd89t" [c3a3a6ca-72ac-4e96-8c27-cbb73de786f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0624 03:25:14.828958    8244 system_pods.go:61] "etcd-addons-517800" [651a0044-9d4b-4888-819d-bf0fd8869505] Running
	I0624 03:25:14.828958    8244 system_pods.go:61] "kube-apiserver-addons-517800" [2fc276c0-6ad2-4b17-964b-bc5a2e7f1c9e] Running
	I0624 03:25:14.828958    8244 system_pods.go:61] "kube-controller-manager-addons-517800" [db592d09-98b2-47bf-955a-4e6d32208cb5] Running
	I0624 03:25:14.829491    8244 system_pods.go:61] "kube-ingress-dns-minikube" [0f9d8efb-ba9d-4dcd-834e-d3f9b47256f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0624 03:25:14.829530    8244 system_pods.go:61] "kube-proxy-njdhk" [05dfa044-5b98-4cdf-a20f-0589cee741b3] Running
	I0624 03:25:14.829530    8244 system_pods.go:61] "kube-scheduler-addons-517800" [35c3f0c1-bfaa-4c83-ab2c-59fd6ae14d88] Running
	I0624 03:25:14.829530    8244 system_pods.go:61] "metrics-server-c59844bb4-q5g7m" [d1ddb2d6-165e-4fa0-b8d4-bd2d32160acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0624 03:25:14.829530    8244 system_pods.go:61] "nvidia-device-plugin-daemonset-ltfsf" [2afb2f39-6132-4d8e-8b6f-344b68dcd8a1] Running
	I0624 03:25:14.829530    8244 system_pods.go:61] "registry-kkmh9" [a8ba7278-c4d6-454b-9ff6-2599925bf8f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0624 03:25:14.829530    8244 system_pods.go:61] "registry-proxy-4pllp" [c424816d-3d97-47f4-96b4-ee6359f55fbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0624 03:25:14.829530    8244 system_pods.go:61] "snapshot-controller-745499f584-79dwm" [6dce7ec9-4fd4-4f49-b6c6-1abe340c376c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0624 03:25:14.829597    8244 system_pods.go:61] "snapshot-controller-745499f584-n447c" [451bdbba-1fa3-465d-bff8-2b7994e759ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0624 03:25:14.829636    8244 system_pods.go:61] "storage-provisioner" [ed11ce1c-1ec8-4cd7-9c88-f0f4338d6045] Running
	I0624 03:25:14.829636    8244 system_pods.go:61] "tiller-deploy-6677d64bcd-sfcx8" [85b80422-f4d9-4038-ac34-1a41eef86170] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0624 03:25:14.829661    8244 system_pods.go:74] duration metric: took 20.2613ms to wait for pod list to return data ...
	I0624 03:25:14.829661    8244 default_sa.go:34] waiting for default service account to be created ...
	I0624 03:25:15.001598    8244 default_sa.go:45] found service account: "default"
	I0624 03:25:15.001712    8244 default_sa.go:55] duration metric: took 172.0501ms for default service account to be created ...
	I0624 03:25:15.001712    8244 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 03:25:15.167533    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:15.223344    8244 system_pods.go:86] 19 kube-system pods found
	I0624 03:25:15.223883    8244 system_pods.go:89] "coredns-7db6d8ff4d-67bql" [3ea28df4-1203-422c-8479-7f1e5c6d3adc] Running
	I0624 03:25:15.223883    8244 system_pods.go:89] "coredns-7db6d8ff4d-q4s6h" [e2ec9f01-e85c-4466-9b12-20393b831457] Running
	I0624 03:25:15.223953    8244 system_pods.go:89] "csi-hostpath-attacher-0" [1530eea4-cc9f-4764-8e39-5ffd7ea801df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0624 03:25:15.223953    8244 system_pods.go:89] "csi-hostpath-resizer-0" [ed9c0dae-31f4-481a-bb68-4df6865d5257] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0624 03:25:15.223953    8244 system_pods.go:89] "csi-hostpathplugin-qd89t" [c3a3a6ca-72ac-4e96-8c27-cbb73de786f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0624 03:25:15.223953    8244 system_pods.go:89] "etcd-addons-517800" [651a0044-9d4b-4888-819d-bf0fd8869505] Running
	I0624 03:25:15.223953    8244 system_pods.go:89] "kube-apiserver-addons-517800" [2fc276c0-6ad2-4b17-964b-bc5a2e7f1c9e] Running
	I0624 03:25:15.223953    8244 system_pods.go:89] "kube-controller-manager-addons-517800" [db592d09-98b2-47bf-955a-4e6d32208cb5] Running
	I0624 03:25:15.223953    8244 system_pods.go:89] "kube-ingress-dns-minikube" [0f9d8efb-ba9d-4dcd-834e-d3f9b47256f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0624 03:25:15.224063    8244 system_pods.go:89] "kube-proxy-njdhk" [05dfa044-5b98-4cdf-a20f-0589cee741b3] Running
	I0624 03:25:15.224063    8244 system_pods.go:89] "kube-scheduler-addons-517800" [35c3f0c1-bfaa-4c83-ab2c-59fd6ae14d88] Running
	I0624 03:25:15.224116    8244 system_pods.go:89] "metrics-server-c59844bb4-q5g7m" [d1ddb2d6-165e-4fa0-b8d4-bd2d32160acd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0624 03:25:15.224116    8244 system_pods.go:89] "nvidia-device-plugin-daemonset-ltfsf" [2afb2f39-6132-4d8e-8b6f-344b68dcd8a1] Running
	I0624 03:25:15.224116    8244 system_pods.go:89] "registry-kkmh9" [a8ba7278-c4d6-454b-9ff6-2599925bf8f1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0624 03:25:15.224116    8244 system_pods.go:89] "registry-proxy-4pllp" [c424816d-3d97-47f4-96b4-ee6359f55fbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0624 03:25:15.224116    8244 system_pods.go:89] "snapshot-controller-745499f584-79dwm" [6dce7ec9-4fd4-4f49-b6c6-1abe340c376c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0624 03:25:15.224116    8244 system_pods.go:89] "snapshot-controller-745499f584-n447c" [451bdbba-1fa3-465d-bff8-2b7994e759ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0624 03:25:15.224205    8244 system_pods.go:89] "storage-provisioner" [ed11ce1c-1ec8-4cd7-9c88-f0f4338d6045] Running
	I0624 03:25:15.224205    8244 system_pods.go:89] "tiller-deploy-6677d64bcd-sfcx8" [85b80422-f4d9-4038-ac34-1a41eef86170] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0624 03:25:15.224205    8244 system_pods.go:126] duration metric: took 222.4925ms to wait for k8s-apps to be running ...
	I0624 03:25:15.224205    8244 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 03:25:15.231152    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 03:25:15.265234    8244 system_svc.go:56] duration metric: took 41.0283ms WaitForService to wait for kubelet
	I0624 03:25:15.265234    8244 kubeadm.go:576] duration metric: took 55.9869035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 03:25:15.265234    8244 node_conditions.go:102] verifying NodePressure condition ...
	I0624 03:25:15.293413    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:15.296140    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:15.406101    8244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 03:25:15.406281    8244 node_conditions.go:123] node cpu capacity is 2
	I0624 03:25:15.406281    8244 node_conditions.go:105] duration metric: took 141.0466ms to run NodePressure ...
	I0624 03:25:15.406365    8244 start.go:240] waiting for startup goroutines ...
	I0624 03:25:15.691595    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:15.826268    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:15.826268    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:16.184285    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:16.304030    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:16.331265    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:16.652999    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:16.802680    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:16.804150    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:17.161848    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:17.295584    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:17.296170    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:17.653790    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:17.792229    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:17.796994    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:18.165428    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:18.304861    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:18.306164    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:18.660727    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:18.805175    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:18.809227    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:19.192503    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:19.292112    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:19.296726    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:19.666561    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:19.799111    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:19.799642    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:20.167740    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:20.296121    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:20.296452    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:20.661941    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:20.807152    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:20.808715    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:21.173003    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:21.312254    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:21.312254    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:21.650777    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:21.807869    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:21.811222    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:22.154307    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:22.292754    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:22.296466    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:22.653287    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:22.796719    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:22.800272    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:23.149940    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:23.298293    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:23.302453    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:23.654516    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:23.788644    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:23.793805    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:24.160000    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:24.302431    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:24.303204    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:24.649590    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:24.801758    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:24.802460    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:25.158463    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:25.297751    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:25.298304    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:25.670983    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:25.892068    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:25.892719    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:26.155327    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:26.298523    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:26.299086    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:26.650905    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:26.799398    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:26.800689    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:27.193390    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:27.494406    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:27.495260    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:27.654871    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:27.807089    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:27.807623    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:28.169893    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:28.314708    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:28.315050    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:28.664012    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:28.800075    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:28.807547    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0624 03:25:29.167847    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:29.289312    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:29.293648    8244 kapi.go:107] duration metric: took 35.5098108s to wait for kubernetes.io/minikube-addons=registry ...
	I0624 03:25:29.667211    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:29.799561    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:30.154580    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:30.291842    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:30.666309    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:30.798156    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:31.168086    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:31.289084    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:31.669159    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:31.797685    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:32.160844    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:32.290284    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:32.661017    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:32.803747    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:33.160247    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:33.306406    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:33.670376    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:33.802115    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:34.148428    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:34.303444    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:34.661627    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:34.799732    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:35.156433    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:35.294207    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:35.664454    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:35.791654    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:36.155310    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:36.294006    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:36.664939    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:36.808725    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:37.161398    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:37.300647    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:37.676485    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:37.795948    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:38.161143    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:38.301472    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:38.650937    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:38.795113    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:39.167897    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:39.300728    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:39.652834    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:39.793305    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:40.153964    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:40.295271    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:40.655268    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:40.795459    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:41.169706    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:41.308228    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:41.664385    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:41.806591    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:42.165653    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:42.309249    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:42.652225    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:42.799487    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:43.166801    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:43.307807    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:43.658877    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:43.802526    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:44.179541    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:44.308028    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:44.664800    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:44.795033    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:45.310412    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:45.310999    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:45.654031    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:45.806602    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:46.166093    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:46.305552    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:46.663353    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:46.805579    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:47.150106    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:47.306218    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:47.651617    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:47.797979    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:48.151926    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:48.315583    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:48.666814    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:48.792665    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:49.151416    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:49.308475    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:49.664166    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:49.798782    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:50.159724    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:50.307773    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:50.657296    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:50.803466    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:51.163316    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:51.297338    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:51.657515    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:51.809566    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:52.160631    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:52.292852    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:52.660103    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:52.805298    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:53.163351    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:53.290302    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:53.656918    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:53.845673    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:54.152539    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:54.299007    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:54.671200    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:54.804474    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:55.161110    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:55.329832    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:55.670881    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:55.804934    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:56.150209    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:56.299839    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:56.647482    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:56.790478    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:57.158597    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:57.295315    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:57.658001    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:57.789693    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:58.160928    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:58.293054    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:58.667072    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:58.794501    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:59.460496    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:25:59.460496    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:59.654241    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:25:59.800676    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:00.165251    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:00.293465    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:00.672193    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:00.802016    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:01.156965    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:01.292187    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:01.652589    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:01.804937    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:02.404135    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:02.410747    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:02.673549    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:02.801279    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:03.158220    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:03.300707    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:03.657072    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:03.793706    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:04.151374    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:04.291216    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:04.664391    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:04.806588    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:05.152712    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:05.296465    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:05.659864    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:05.798555    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:06.165322    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:06.302919    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:06.658897    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:06.795174    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:07.157638    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:07.294327    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:07.659507    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:07.807519    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:08.175250    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:08.296144    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:08.662153    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:08.793302    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:09.168145    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:09.311358    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:09.668799    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:09.798121    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:10.156126    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:10.307622    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:10.664689    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:10.796828    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:11.155229    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:11.299444    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:11.664626    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:11.807365    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:12.162641    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:12.290029    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:12.659523    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:12.790364    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:13.158245    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:13.299804    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:13.777936    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:13.815310    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:14.163704    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:14.298250    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:14.654311    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:14.799475    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:15.161335    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:15.298988    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:15.664508    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:15.800497    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:16.154787    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:16.302941    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:16.665686    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:16.791758    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:17.153409    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:17.308935    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:17.658989    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:17.801323    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:18.167364    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:18.292653    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:18.697270    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:18.800595    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:19.155131    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:19.291317    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:19.666698    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:19.790695    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:20.166831    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:20.291208    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:20.700006    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:20.808853    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:21.162406    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:21.305886    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:21.656273    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:21.797191    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:22.164279    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:22.297543    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:22.660317    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:22.790611    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:23.163758    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:23.301543    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:23.652648    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:23.798453    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:24.168426    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:24.291171    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:24.654835    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:24.803524    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:25.158579    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:25.306609    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:25.662829    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:25.802428    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:26.165291    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:26.297340    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:26.672144    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:26.796023    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:27.151333    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:27.291511    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:27.658276    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:27.796932    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:28.167693    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:28.299340    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:28.663840    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:28.805404    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:29.152450    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:29.307185    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:29.654851    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:29.802893    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:30.155009    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:30.291231    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:30.651883    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:30.805159    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:31.156315    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:31.293481    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:31.660494    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:31.799227    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:32.168015    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:32.290679    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:32.860038    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:32.867865    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:33.152084    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:33.310147    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:33.652113    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:33.805169    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:34.177684    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:34.302320    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:34.653434    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:34.795324    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:35.158903    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:35.316210    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:35.650473    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:35.788448    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:36.159458    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:36.302960    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:37.104687    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:37.106165    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:37.158700    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:37.292149    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:37.660540    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:37.793766    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:38.158771    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:38.288887    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:38.662453    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:38.796941    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:39.154586    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:39.301041    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:39.666266    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:39.805670    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:40.155197    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:40.302723    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:40.653179    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:40.807871    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:41.168713    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:41.331472    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:41.651590    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:41.790575    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:42.164865    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:42.308584    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:42.648479    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:42.805213    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:43.158917    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:43.304325    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:43.653949    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:43.796515    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:44.158991    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:44.311281    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:44.659906    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:44.807698    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:45.161781    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:45.300365    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:45.652573    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:45.789953    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:46.162725    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:46.293207    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:46.657955    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:46.789237    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:47.162589    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:47.304348    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:47.667904    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:47.790977    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:48.157953    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:48.296952    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:48.666882    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:48.804957    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:49.174314    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:49.305184    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:49.659371    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:49.810246    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:50.174346    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:50.303329    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:50.654731    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:50.797848    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:51.199884    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:51.300668    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:51.668307    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:51.796287    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:52.150204    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:52.289966    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:52.659434    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:52.899971    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:53.160802    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:53.302599    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:53.664381    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:53.793919    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:54.168469    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:54.294827    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:54.651490    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:54.792492    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:55.163833    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:55.300717    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:55.661412    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:55.808005    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:56.263367    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:56.292754    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:56.661833    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:56.803194    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:57.173845    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:57.305396    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:57.656800    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:57.805781    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:58.158603    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:58.309228    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:58.668265    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:58.794054    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:59.169262    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:59.297425    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:26:59.663239    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:26:59.798949    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:00.153850    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:00.298632    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:00.648575    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:00.809509    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:01.155577    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:01.295716    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:01.656493    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:01.801314    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:02.167521    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:02.310907    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:02.661128    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:02.797394    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:03.163859    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:03.298946    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:03.658872    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:03.794095    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:04.151912    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:04.309217    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:04.670200    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:04.789646    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:05.164684    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:05.290803    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:05.659961    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:05.791223    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:06.163480    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:06.295459    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:06.650499    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:06.797922    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:07.166169    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:07.290619    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:07.652365    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:07.800380    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:08.154487    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:08.304637    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:08.669263    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:08.800492    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:09.155619    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:09.296403    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:09.650069    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:09.789534    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:10.273553    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:10.294114    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:10.656796    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:10.790993    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:11.156208    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:11.304811    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:11.665023    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0624 03:27:11.792697    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:12.165412    8244 kapi.go:107] duration metric: took 2m14.0238555s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0624 03:27:12.296076    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:12.816436    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:13.302533    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:14.058626    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:14.395766    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:14.796943    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:15.293446    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:15.797749    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:16.302497    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:16.790851    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:17.307414    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:18.561684    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:18.571912    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:18.902860    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:19.292453    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:19.803043    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:20.296937    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:20.809933    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:21.306063    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:21.803159    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:22.292981    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:22.800529    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:24.215385    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:24.223318    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:24.568643    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:24.806369    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:25.301570    8244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0624 03:27:25.807456    8244 kapi.go:107] duration metric: took 2m32.0244942s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0624 03:27:44.512868    8244 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0624 03:27:44.512868    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:45.003105    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:45.513931    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:46.000751    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:46.501138    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:47.005023    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:47.501362    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:48.014197    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:48.508535    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:49.013907    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:49.511406    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:50.012452    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:50.507766    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:51.000119    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:51.508974    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:52.003378    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:52.506773    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:53.012611    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:53.514295    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:54.004906    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:54.504637    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:55.017084    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:55.500656    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:55.999302    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:56.498550    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:56.999460    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:57.501219    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:58.000820    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:58.501743    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:59.000816    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:27:59.512623    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:00.013365    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:00.501580    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:01.014014    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:01.511735    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:01.999950    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:02.511849    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:03.012760    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:03.522204    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:04.023539    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:04.501586    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:05.018018    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:05.516148    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:06.003371    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:06.512621    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:07.014003    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:07.513622    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:08.005588    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:08.503282    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:09.011131    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:09.510995    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:10.001969    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:10.512562    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:11.012622    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:11.501774    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:12.007856    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:12.509959    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:13.010944    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:13.501511    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:14.013856    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:14.509736    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:15.010870    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:15.526541    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:16.009482    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:16.514583    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:17.008889    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:17.512084    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:18.010364    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:18.508782    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:19.000710    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:19.508210    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:20.067924    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:20.502764    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:21.001938    8244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0624 03:28:21.504953    8244 kapi.go:107] duration metric: took 3m21.0112596s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0624 03:28:21.507595    8244 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-517800 cluster.
	I0624 03:28:21.514139    8244 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0624 03:28:21.518097    8244 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0624 03:28:21.521107    8244 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, volcano, nvidia-device-plugin, metrics-server, cloud-spanner, helm-tiller, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0624 03:28:21.525592    8244 addons.go:510] duration metric: took 4m2.2465168s for enable addons: enabled=[storage-provisioner ingress-dns volcano nvidia-device-plugin metrics-server cloud-spanner helm-tiller yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0624 03:28:21.525824    8244 start.go:245] waiting for cluster config update ...
	I0624 03:28:21.525824    8244 start.go:254] writing updated cluster config ...
	I0624 03:28:21.534850    8244 ssh_runner.go:195] Run: rm -f paused
	I0624 03:28:21.775543    8244 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0624 03:28:21.780240    8244 out.go:177] * Done! kubectl is now configured to use "addons-517800" cluster and "default" namespace by default
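	
	Note on the repeated kapi.go:96 lines above: minikube is polling the cluster until the pods matching a label selector (for example kubernetes.io/minikube-addons=csi-hostpath-driver) leave Pending, and kapi.go:107 then reports the total wait as a duration metric. The Go sketch below only illustrates that polling pattern with client-go; the function names, the 500ms interval, and the error handling are assumptions for illustration, not minikube's actual kapi.go implementation.
	
	// waitloop_sketch.go -- a minimal sketch (assumed names, not minikube's code)
	// of polling for pods matching a label selector until they are Running.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods polls namespace ns until every pod matching selector reports
	// phase Running, or the timeout expires.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}
	
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}
	
	func main() {
		// Assumes a kubeconfig at the default location; adjust for other setups.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		_ = waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
	}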
	
	
	==> Docker <==
	Jun 24 10:29:07 addons-517800 dockerd[1318]: time="2024-06-24T10:29:07.995347911Z" level=warning msg="failed to close stdin: task 4406b2312a790c0e755e0865b733a7695f167d097d6071c801ac085e25e169f5 not found: not found"
	Jun 24 10:29:08 addons-517800 cri-dockerd[1225]: time="2024-06-24T10:29:08Z" level=error msg="error getting RW layer size for container ID 'd98d3923f558972c9173b9c90ae0885165741dd099ec150f1e1415d0c6a676bb': Error response from daemon: No such container: d98d3923f558972c9173b9c90ae0885165741dd099ec150f1e1415d0c6a676bb"
	Jun 24 10:29:08 addons-517800 cri-dockerd[1225]: time="2024-06-24T10:29:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd98d3923f558972c9173b9c90ae0885165741dd099ec150f1e1415d0c6a676bb'"
	Jun 24 10:29:09 addons-517800 dockerd[1318]: time="2024-06-24T10:29:09.678135357Z" level=info msg="ignoring event" container=68fc0052b8eccc7a35de58ae357cd2aee539ad44caee1cc0dba4f8bd4b7459d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:29:09 addons-517800 dockerd[1324]: time="2024-06-24T10:29:09.678268757Z" level=info msg="shim disconnected" id=68fc0052b8eccc7a35de58ae357cd2aee539ad44caee1cc0dba4f8bd4b7459d8 namespace=moby
	Jun 24 10:29:09 addons-517800 dockerd[1324]: time="2024-06-24T10:29:09.678560156Z" level=warning msg="cleaning up after shim disconnected" id=68fc0052b8eccc7a35de58ae357cd2aee539ad44caee1cc0dba4f8bd4b7459d8 namespace=moby
	Jun 24 10:29:09 addons-517800 dockerd[1324]: time="2024-06-24T10:29:09.678655356Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:29:11 addons-517800 dockerd[1324]: time="2024-06-24T10:29:11.848912074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:29:11 addons-517800 dockerd[1324]: time="2024-06-24T10:29:11.849075373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:29:11 addons-517800 dockerd[1324]: time="2024-06-24T10:29:11.849113073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:29:11 addons-517800 dockerd[1324]: time="2024-06-24T10:29:11.849271073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:29:12 addons-517800 cri-dockerd[1225]: time="2024-06-24T10:29:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea9f69bbba26e00a24950faa03311b9da6b5bba78866b52d7444a9bc14cffe8b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.464350628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.464559228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.464639727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.464908126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.633515893Z" level=info msg="shim disconnected" id=3357ee3b270976ecdea1387bfeaaf63b3b4b7c265952aac65fdca3e6496b76c0 namespace=moby
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.633622893Z" level=warning msg="cleaning up after shim disconnected" id=3357ee3b270976ecdea1387bfeaaf63b3b4b7c265952aac65fdca3e6496b76c0 namespace=moby
	Jun 24 10:29:12 addons-517800 dockerd[1324]: time="2024-06-24T10:29:12.633642693Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:29:12 addons-517800 dockerd[1318]: time="2024-06-24T10:29:12.635882586Z" level=info msg="ignoring event" container=3357ee3b270976ecdea1387bfeaaf63b3b4b7c265952aac65fdca3e6496b76c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:29:15 addons-517800 dockerd[1324]: time="2024-06-24T10:29:15.011492932Z" level=info msg="shim disconnected" id=ea9f69bbba26e00a24950faa03311b9da6b5bba78866b52d7444a9bc14cffe8b namespace=moby
	Jun 24 10:29:15 addons-517800 dockerd[1324]: time="2024-06-24T10:29:15.011574531Z" level=warning msg="cleaning up after shim disconnected" id=ea9f69bbba26e00a24950faa03311b9da6b5bba78866b52d7444a9bc14cffe8b namespace=moby
	Jun 24 10:29:15 addons-517800 dockerd[1324]: time="2024-06-24T10:29:15.011589431Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:29:15 addons-517800 dockerd[1318]: time="2024-06-24T10:29:15.012126829Z" level=info msg="ignoring event" container=ea9f69bbba26e00a24950faa03311b9da6b5bba78866b52d7444a9bc14cffe8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:29:15 addons-517800 dockerd[1324]: time="2024-06-24T10:29:15.035052046Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:29:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	4406b2312a790       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          11 seconds ago       Exited              helm-test                                0                   68fc0052b8ecc       helm-test
	87953fd21fd10       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                                              20 seconds ago       Exited              busybox                                  0                   44e5f5fba8b5c       test-local-path
	c7bf9d3080984       ghcr.io/headlamp-k8s/headlamp@sha256:c48d3702275225be765218b1caffea7fc514ed31bc11533af71ffd1ee6f2fde1                                        27 seconds ago       Running             headlamp                                 0                   88ce974653968       headlamp-7fc69f7444-4xxnw
	8221b0ada89bf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:40402d51273ea7d281392557096333b5f62316a684f9bc9252214243840f757e                            40 seconds ago       Exited              gadget                                   4                   8d0b2cd2bef82       gadget-rgrdr
	bf32efb102a1e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 59 seconds ago       Running             gcp-auth                                 0                   0996af8efeac2       gcp-auth-5db96cd9b4-nr6t5
	aff78e5d5ba2d       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   0308488e179b7       ingress-nginx-controller-768f948f8f-z2r2h
	ec710b327bf06       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   490990d30ca6e       volcano-admission-7b497cf95b-fxrtv
	341f1b81d6574       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago        Running             csi-snapshotter                          0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	efcfe344464ff       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          2 minutes ago        Running             csi-provisioner                          0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	012bf1e9ca0a6       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	239d893c13d70       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	cb14c23b87d52       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	701f08704e2e6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   c411b1903673a       csi-hostpath-resizer-0
	47dfbf1dbf971       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   9e03d3146baf9       csi-hostpathplugin-qd89t
	15ae0135ee6ad       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   da3f27ad65b28       csi-hostpath-attacher-0
	d0a4fdfa77f09       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   8babb166ddf8f       volcano-scheduler-765f888978-s7h8j
	4928132e89846       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   0295c9ba819ea       volcano-controller-86c5446455-bw9lf
	7a1ca4ee99939       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   5dc4f7a2e9a22       ingress-nginx-admission-patch-mctwq
	e2e78b63bda1c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   50d4ab97285e4       ingress-nginx-admission-create-7wsxr
	1f3fea1a67b85       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   08eb653eba703       snapshot-controller-745499f584-n447c
	dc76ba245f9fb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   a98396420cced       snapshot-controller-745499f584-79dwm
	05fd91472b6df       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   07e6bf1c55bfe       local-path-provisioner-8d985888d-kc75s
	6e4480b2222f0       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   b0c607a53e253       yakd-dashboard-5ddbf7d777-pwqfl
	5bb7e62ed270e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   37271d02c67aa       tiller-deploy-6677d64bcd-sfcx8
	68c0f8f7f4278       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   9eae69738cdf3       metrics-server-c59844bb4-q5g7m
	7a7e8d80c6e2e       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   94e3796b3eb2e       cloud-spanner-emulator-6fcd4f6f98-n4lcp
	45278175623ff       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   e5b39da2544eb       kube-ingress-dns-minikube
	9cba7c26d9711       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   070a6c26fe67d       storage-provisioner
	0e5753d81b605       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   9efe48a99c8c6       coredns-7db6d8ff4d-67bql
	e6d333af7f5b8       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   12a18f76329d6       coredns-7db6d8ff4d-q4s6h
	abeabc9dfd868       53c535741fb44                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   7098012db4dd1       kube-proxy-njdhk
	1455f540d162b       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   878c1f659d211       etcd-addons-517800
	585b5894df828       56ce0fd9fb532                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   39bcb251bad40       kube-apiserver-addons-517800
	45ab8d38fda96       e874818b3caac                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   b76aa8ba1b27e       kube-controller-manager-addons-517800
	abecdd9a2ae5a       7820c83aa1394                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   804ad3097a91d       kube-scheduler-addons-517800
	
	
	==> controller_ingress [aff78e5d5ba2] <==
	I0624 10:27:24.891910       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.2" state="clean" commit="39683505b630ff2121012f3c5b16215a1449d5ed" platform="linux/amd64"
	I0624 10:27:25.253653       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0624 10:27:25.288130       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0624 10:27:25.311084       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0624 10:27:25.339021       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"8c2940be-1ff2-4bd4-9cab-22efa5ef7db0", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0624 10:27:25.351097       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0c820a69-3daa-4855-9500-d07c89625b23", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0624 10:27:25.351138       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"8bfc11df-b695-45d8-9418-a33450a2d584", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0624 10:27:26.514815       7 nginx.go:307] "Starting NGINX process"
	I0624 10:27:26.515201       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0624 10:27:26.516339       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0624 10:27:26.516978       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0624 10:27:26.534170       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0624 10:27:26.534494       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-z2r2h"
	I0624 10:27:26.539191       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-z2r2h" node="addons-517800"
	I0624 10:27:26.569987       7 controller.go:210] "Backend successfully reloaded"
	I0624 10:27:26.570330       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0624 10:27:26.570775       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-z2r2h", UID:"c37e6dbc-a4fb-455a-9ade-378b8f31d926", APIVersion:"v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	NGINX Ingress controller
	  Release:       v1.10.1
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [0e5753d81b60] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1671804444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (24-Jun-2024 10:24:33.301) (total time: 30000ms):
	Trace[1671804444]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:25:03.302)
	Trace[1671804444]: [30.000370327s] [30.000370327s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2004783054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (24-Jun-2024 10:24:33.302) (total time: 30008ms):
	Trace[2004783054]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30007ms (10:25:03.309)
	Trace[2004783054]: [30.008728578s] [30.008728578s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.6:55352 - 55466 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000292799s
	[INFO] 10.244.0.6:55352 - 64943 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000208699s
	[INFO] 10.244.0.6:40894 - 18827 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097399s
	[INFO] 10.244.0.6:40894 - 9097 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000207899s
	[INFO] 10.244.0.6:50325 - 35023 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092999s
	[INFO] 10.244.0.6:50325 - 51658 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000584s
	[INFO] 10.244.0.6:33889 - 3847 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000153599s
	[INFO] 10.244.0.6:33889 - 7425 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103999s
	[INFO] 10.244.0.26:58369 - 57256 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000571099s
	[INFO] 10.244.0.26:57827 - 33093 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001287s
	[INFO] 10.244.0.26:46985 - 16951 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154799s
	[INFO] 10.244.0.26:35016 - 15125 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118799s
	[INFO] 10.244.0.26:43330 - 9591 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002379094s
	[INFO] 10.244.0.27:38899 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000315899s
	
	
	==> coredns [e6d333af7f5b] <==
	[INFO] plugin/kubernetes: Trace[81608012]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (24-Jun-2024 10:24:33.167) (total time: 30004ms):
	Trace[81608012]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:25:03.169)
	Trace[81608012]: [30.004025422s] [30.004025422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[655751853]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (24-Jun-2024 10:24:33.165) (total time: 30005ms):
	Trace[655751853]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:25:03.167)
	Trace[655751853]: [30.005314603s] [30.005314603s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56789 - 17690 "HINFO IN 4209327174520598644.6219083007247756827. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.143364515s
	[INFO] 10.244.0.6:54131 - 8713 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000403597s
	[INFO] 10.244.0.6:54131 - 40963 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000262398s
	[INFO] 10.244.0.6:48154 - 46568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139799s
	[INFO] 10.244.0.6:48154 - 8170 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000318898s
	[INFO] 10.244.0.6:37890 - 43746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152099s
	[INFO] 10.244.0.6:37890 - 63981 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045399s
	[INFO] 10.244.0.6:59165 - 40529 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000248999s
	[INFO] 10.244.0.6:59165 - 28759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134199s
	[INFO] 10.244.0.26:38870 - 24332 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000413299s
	[INFO] 10.244.0.26:50512 - 24085 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000289399s
	[INFO] 10.244.0.26:48799 - 4582 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002109994s
	[INFO] 10.244.0.27:57929 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000298499s
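	
	The NXDOMAIN entries above for names such as registry.kube-system.svc.cluster.local.svc.cluster.local are the in-pod resolver walking its DNS search list (the resolv.conf rewritten by cri-dockerd earlier in this report uses ndots:5) before the final NOERROR answer. A minimal Go probe of that behaviour is sketched below; it assumes it runs inside a pod of this cluster and is not part of the captured logs.
	
	// dns_probe.go -- illustrative only: contrasts a relative lookup, which may be
	// expanded through the pod's search domains (producing the NXDOMAIN attempts
	// seen in the coredns log), with a fully qualified lookup that is not.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Without a trailing dot the resolver may try search suffixes first.
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
		fmt.Println("relative lookup:", addrs, err)
	
		// A trailing dot marks the name as fully qualified, skipping the search list.
		addrs, err = net.LookupHost("registry.kube-system.svc.cluster.local.")
		fmt.Println("FQDN lookup:    ", addrs, err)
	}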
	
	
	==> describe nodes <==
	Name:               addons-517800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-517800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=addons-517800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T03_24_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-517800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-517800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 10:24:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-517800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 10:29:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 10:29:12 +0000   Mon, 24 Jun 2024 10:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 10:29:12 +0000   Mon, 24 Jun 2024 10:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 10:29:12 +0000   Mon, 24 Jun 2024 10:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 10:29:12 +0000   Mon, 24 Jun 2024 10:24:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.209.187
	  Hostname:    addons-517800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 508d1d1948914e88ac7ea86775950a26
	  System UUID:                02409cf8-b0a7-074e-bae7-c63747a15830
	  Boot ID:                    15e42ab3-6d5a-423b-8bcb-d5a20ca02d2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-n4lcp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  gadget                      gadget-rgrdr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  gcp-auth                    gcp-auth-5db96cd9b4-nr6t5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  headlamp                    headlamp-7fc69f7444-4xxnw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-z2r2h    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m25s
	  kube-system                 coredns-7db6d8ff4d-67bql                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m56s
	  kube-system                 coredns-7db6d8ff4d-q4s6h                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 csi-hostpathplugin-qd89t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-517800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m12s
	  kube-system                 kube-apiserver-addons-517800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-517800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-njdhk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-517800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 metrics-server-c59844bb4-q5g7m               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m30s
	  kube-system                 snapshot-controller-745499f584-79dwm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-745499f584-n447c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 tiller-deploy-6677d64bcd-sfcx8               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-8d985888d-kc75s       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  volcano-system              volcano-admission-7b497cf95b-fxrtv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system              volcano-controller-86c5446455-bw9lf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  volcano-system              volcano-scheduler-765f888978-s7h8j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-pwqfl              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (17%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m44s  kube-proxy       
	  Normal  Starting                 5m13s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m13s  kubelet          Node addons-517800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s  kubelet          Node addons-517800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s  kubelet          Node addons-517800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m10s  kubelet          Node addons-517800 status is now: NodeReady
	  Normal  RegisteredNode           4m59s  node-controller  Node addons-517800 event: Registered Node addons-517800 in Controller
	
	
	==> dmesg <==
	[  +0.086745] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.928921] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.133167] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.142317] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.049667] kauditd_printk_skb: 99 callbacks suppressed
	[Jun24 10:25] kauditd_printk_skb: 39 callbacks suppressed
	[ +23.143705] kauditd_printk_skb: 4 callbacks suppressed
	[Jun24 10:26] kauditd_printk_skb: 29 callbacks suppressed
	[ +19.877215] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.091687] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.877935] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.437751] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.958214] kauditd_printk_skb: 34 callbacks suppressed
	[Jun24 10:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.085182] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.018363] kauditd_printk_skb: 33 callbacks suppressed
	[Jun24 10:28] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.644865] kauditd_printk_skb: 40 callbacks suppressed
	[ +18.167948] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.788823] kauditd_printk_skb: 59 callbacks suppressed
	[ +10.545309] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.122622] kauditd_printk_skb: 24 callbacks suppressed
	[Jun24 10:29] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.971099] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.395492] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [1455f540d162] <==
	{"level":"warn","ts":"2024-06-24T10:28:45.261889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.840541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4360"}
	{"level":"info","ts":"2024-06-24T10:28:45.262012Z","caller":"traceutil/trace.go:171","msg":"trace[141424003] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1641; }","duration":"116.983241ms","start":"2024-06-24T10:28:45.145014Z","end":"2024-06-24T10:28:45.261997Z","steps":["trace[141424003] 'agreement among raft nodes before linearized reading'  (duration: 116.788442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:49.448876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.498178ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/my-volcano/test-job-37187fe8-08ae-4bc3-9c41-aa5e89a7fea5.17dbe9f4f6c32f65\" ","response":"range_response_count:1 size:816"}
	{"level":"info","ts":"2024-06-24T10:28:49.449019Z","caller":"traceutil/trace.go:171","msg":"trace[651854363] range","detail":"{range_begin:/registry/events/my-volcano/test-job-37187fe8-08ae-4bc3-9c41-aa5e89a7fea5.17dbe9f4f6c32f65; range_end:; response_count:1; response_revision:1662; }","duration":"158.651173ms","start":"2024-06-24T10:28:49.290353Z","end":"2024-06-24T10:28:49.449004Z","steps":["trace[651854363] 'range keys from in-memory index tree'  (duration: 157.336878ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:49.874027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.889075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-24T10:28:49.874113Z","caller":"traceutil/trace.go:171","msg":"trace[1679201479] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1663; }","duration":"133.025875ms","start":"2024-06-24T10:28:49.74107Z","end":"2024-06-24T10:28:49.874096Z","steps":["trace[1679201479] 'range keys from in-memory index tree'  (duration: 130.166185ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T10:28:50.111254Z","caller":"traceutil/trace.go:171","msg":"trace[861373927] linearizableReadLoop","detail":"{readStateIndex:1743; appliedIndex:1742; }","duration":"187.310459ms","start":"2024-06-24T10:28:49.923925Z","end":"2024-06-24T10:28:50.111235Z","steps":["trace[861373927] 'read index received'  (duration: 187.138459ms)","trace[861373927] 'applied index is now lower than readState.Index'  (duration: 171.3µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-24T10:28:50.113885Z","caller":"traceutil/trace.go:171","msg":"trace[958778385] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"219.593231ms","start":"2024-06-24T10:28:49.894277Z","end":"2024-06-24T10:28:50.113871Z","steps":["trace[958778385] 'process raft request'  (duration: 216.831542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.114317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.372147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:2867"}
	{"level":"info","ts":"2024-06-24T10:28:50.11459Z","caller":"traceutil/trace.go:171","msg":"trace[1922684539] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1664; }","duration":"190.684346ms","start":"2024-06-24T10:28:49.923896Z","end":"2024-06-24T10:28:50.11458Z","steps":["trace[1922684539] 'agreement among raft nodes before linearized reading'  (duration: 190.324347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.11521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.206751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2024-06-24T10:28:50.115432Z","caller":"traceutil/trace.go:171","msg":"trace[419859252] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1664; }","duration":"189.44525ms","start":"2024-06-24T10:28:49.925978Z","end":"2024-06-24T10:28:50.115423Z","steps":["trace[419859252] 'agreement among raft nodes before linearized reading'  (duration: 189.158251ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.115864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.802462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5\" ","response":"range_response_count:1 size:4206"}
	{"level":"info","ts":"2024-06-24T10:28:50.117712Z","caller":"traceutil/trace.go:171","msg":"trace[1588384912] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5; range_end:; response_count:1; response_revision:1664; }","duration":"137.676755ms","start":"2024-06-24T10:28:49.980025Z","end":"2024-06-24T10:28:50.117701Z","steps":["trace[1588384912] 'agreement among raft nodes before linearized reading'  (duration: 135.788163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.118064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.214882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-06-24T10:28:50.118338Z","caller":"traceutil/trace.go:171","msg":"trace[460852988] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1664; }","duration":"156.515181ms","start":"2024-06-24T10:28:49.961813Z","end":"2024-06-24T10:28:50.118328Z","steps":["trace[460852988] 'agreement among raft nodes before linearized reading'  (duration: 156.160582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.408909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.222245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-06-24T10:28:50.408971Z","caller":"traceutil/trace.go:171","msg":"trace[1301796154] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1665; }","duration":"140.327644ms","start":"2024-06-24T10:28:50.26863Z","end":"2024-06-24T10:28:50.408958Z","steps":["trace[1301796154] 'range keys from in-memory index tree'  (duration: 139.958346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.506526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.522998ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15038803702228055808 > lease_revoke:<id:50b49049c621f5cb>","response":"size:29"}
	{"level":"info","ts":"2024-06-24T10:28:50.5073Z","caller":"traceutil/trace.go:171","msg":"trace[1050800004] linearizableReadLoop","detail":"{readStateIndex:1745; appliedIndex:1744; }","duration":"214.302252ms","start":"2024-06-24T10:28:50.292979Z","end":"2024-06-24T10:28:50.507281Z","steps":["trace[1050800004] 'read index received'  (duration: 111.948857ms)","trace[1050800004] 'applied index is now lower than readState.Index'  (duration: 102.351295ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-24T10:28:50.507694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.607951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/my-volcano/test-job-37187fe8-08ae-4bc3-9c41-aa5e89a7fea5.17dbe9f4f6c32f65\" ","response":"range_response_count:1 size:816"}
	{"level":"info","ts":"2024-06-24T10:28:50.507743Z","caller":"traceutil/trace.go:171","msg":"trace[884307230] range","detail":"{range_begin:/registry/events/my-volcano/test-job-37187fe8-08ae-4bc3-9c41-aa5e89a7fea5.17dbe9f4f6c32f65; range_end:; response_count:1; response_revision:1666; }","duration":"214.786749ms","start":"2024-06-24T10:28:50.292946Z","end":"2024-06-24T10:28:50.507733Z","steps":["trace[884307230] 'agreement among raft nodes before linearized reading'  (duration: 214.423351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T10:28:50.902131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.806447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-06-24T10:28:50.902437Z","caller":"traceutil/trace.go:171","msg":"trace[344496485] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1667; }","duration":"165.149746ms","start":"2024-06-24T10:28:50.73727Z","end":"2024-06-24T10:28:50.90242Z","steps":["trace[344496485] 'range keys from in-memory index tree'  (duration: 164.558649ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T10:28:50.957432Z","caller":"traceutil/trace.go:171","msg":"trace[1128600384] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"154.758788ms","start":"2024-06-24T10:28:50.802655Z","end":"2024-06-24T10:28:50.957413Z","steps":["trace[1128600384] 'process raft request'  (duration: 154.22309ms)"],"step_count":1}
	
	
	==> gcp-auth [bf32efb102a1] <==
	2024/06/24 10:28:20 GCP Auth Webhook started!
	2024/06/24 10:28:34 Ready to marshal response ...
	2024/06/24 10:28:34 Ready to write response ...
	2024/06/24 10:28:37 Ready to marshal response ...
	2024/06/24 10:28:37 Ready to write response ...
	2024/06/24 10:28:37 Ready to marshal response ...
	2024/06/24 10:28:37 Ready to write response ...
	2024/06/24 10:28:37 Ready to marshal response ...
	2024/06/24 10:28:37 Ready to write response ...
	2024/06/24 10:28:39 Ready to marshal response ...
	2024/06/24 10:28:39 Ready to write response ...
	2024/06/24 10:28:40 Ready to marshal response ...
	2024/06/24 10:28:40 Ready to write response ...
	2024/06/24 10:28:48 Ready to marshal response ...
	2024/06/24 10:28:48 Ready to write response ...
	2024/06/24 10:28:48 Ready to marshal response ...
	2024/06/24 10:28:48 Ready to write response ...
	2024/06/24 10:29:02 Ready to marshal response ...
	2024/06/24 10:29:02 Ready to write response ...
	2024/06/24 10:29:11 Ready to marshal response ...
	2024/06/24 10:29:11 Ready to write response ...
	
	
	==> kernel <==
	 10:29:18 up 7 min,  0 users,  load average: 1.40, 1.98, 1.04
	Linux addons-517800 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [585b5894df82] <==
	Trace[61251201]: [552.15185ms] [552.15185ms] END
	I0624 10:27:18.567024       1 trace.go:236] Trace[405920751]: "List" accept:application/json, */*,audit-id:6722332a-0d9a-4e21-b83e-6e39fce61002,client:172.31.208.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (24-Jun-2024 10:27:17.796) (total time: 770ms):
	Trace[405920751]: ["List(recursive=true) etcd3" audit-id:6722332a-0d9a-4e21-b83e-6e39fce61002,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 770ms (10:27:17.796)]
	Trace[405920751]: [770.456825ms] [770.456825ms] END
	W0624 10:27:18.695126       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.239.127:443: connect: connection refused
	W0624 10:27:19.750431       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.239.127:443: connect: connection refused
	W0624 10:27:20.756631       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.239.127:443: connect: connection refused
	W0624 10:27:21.776113       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.239.127:443: connect: connection refused
	W0624 10:27:22.845895       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.239.127:443: connect: connection refused
	I0624 10:27:24.218080       1 trace.go:236] Trace[1533702097]: "List" accept:application/json, */*,audit-id:db2e0c3b-aff8-4d79-9f57-1d0e745d357b,client:172.31.208.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (24-Jun-2024 10:27:23.512) (total time: 705ms):
	Trace[1533702097]: ["List(recursive=true) etcd3" audit-id:db2e0c3b-aff8-4d79-9f57-1d0e745d357b,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 705ms (10:27:23.512)]
	Trace[1533702097]: [705.227443ms] [705.227443ms] END
	I0624 10:27:24.218282       1 trace.go:236] Trace[493596473]: "List" accept:application/json, */*,audit-id:f1e63922-5056-488e-a84e-13347f8b4abd,client:172.31.208.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (24-Jun-2024 10:27:23.303) (total time: 912ms):
	Trace[493596473]: ["List(recursive=true) etcd3" audit-id:f1e63922-5056-488e-a84e-13347f8b4abd,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 914ms (10:27:23.303)]
	Trace[493596473]: [912.25147ms] [912.25147ms] END
	W0624 10:27:44.375603       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	E0624 10:27:44.375730       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	W0624 10:28:03.481109       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	E0624 10:28:03.481263       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	W0624 10:28:03.526269       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	E0624 10:28:03.526331       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.166.108:443: connect: connection refused
	I0624 10:28:37.532383       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.53.92"}
	I0624 10:28:39.630917       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0624 10:28:39.789952       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	E0624 10:29:07.948380       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 172.31.209.187:8443->10.244.0.31:43610: read: connection reset by peer
	
	
	==> kube-controller-manager [45ab8d38fda9] <==
	I0624 10:28:06.907287       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:06.923484       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:07.608981       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:07.629954       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:07.922095       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:07.940254       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:07.951156       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:07.957991       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:07.960587       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:07.969190       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:21.156412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="21.893141ms"
	I0624 10:28:21.157077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="29.1µs"
	I0624 10:28:37.034939       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:37.050154       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:37.201265       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0624 10:28:37.207943       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0624 10:28:37.706988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="107.942955ms"
	I0624 10:28:37.751652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="44.263659ms"
	I0624 10:28:37.800983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="42.308365ms"
	I0624 10:28:37.801050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="34.6µs"
	I0624 10:28:39.100085       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	I0624 10:28:51.682458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="58.3µs"
	I0624 10:28:51.739205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="19.173924ms"
	I0624 10:28:51.739635       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="277.399µs"
	I0624 10:28:56.984057       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="13.9µs"
	
	
	==> kube-proxy [abeabc9dfd86] <==
	I0624 10:24:33.275088       1 server_linux.go:69] "Using iptables proxy"
	I0624 10:24:33.440118       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.209.187"]
	I0624 10:24:33.661647       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 10:24:33.661797       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 10:24:33.661832       1 server_linux.go:165] "Using iptables Proxier"
	I0624 10:24:33.692938       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 10:24:33.693254       1 server.go:872] "Version info" version="v1.30.2"
	I0624 10:24:33.693276       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 10:24:33.722208       1 config.go:192] "Starting service config controller"
	I0624 10:24:33.722284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 10:24:33.722395       1 config.go:101] "Starting endpoint slice config controller"
	I0624 10:24:33.722414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 10:24:33.738903       1 config.go:319] "Starting node config controller"
	I0624 10:24:33.738933       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 10:24:33.834384       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 10:24:33.834493       1 shared_informer.go:320] Caches are synced for service config
	I0624 10:24:33.850643       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [abecdd9a2ae5] <==
	W0624 10:24:03.522606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 10:24:03.522657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 10:24:03.580899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0624 10:24:03.581052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0624 10:24:03.670134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0624 10:24:03.670194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0624 10:24:03.672848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0624 10:24:03.673001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0624 10:24:03.769332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0624 10:24:03.771748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0624 10:24:03.818254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0624 10:24:03.818532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0624 10:24:03.838070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 10:24:03.838380       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 10:24:03.845388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0624 10:24:03.845437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0624 10:24:03.927344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0624 10:24:03.927419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0624 10:24:03.964411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0624 10:24:03.964462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0624 10:24:03.986534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0624 10:24:03.986602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0624 10:24:04.071946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0624 10:24:04.072292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 10:24:06.002489       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 24 10:29:10 addons-517800 kubelet[2098]: I0624 10:29:10.030750    2098 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rdw9f\" (UniqueName: \"kubernetes.io/projected/1edaa77a-b52f-44d6-8870-4e6e4b16b5e0-kube-api-access-rdw9f\") on node \"addons-517800\" DevicePath \"\""
	Jun 24 10:29:10 addons-517800 kubelet[2098]: I0624 10:29:10.531517    2098 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fc0052b8eccc7a35de58ae357cd2aee539ad44caee1cc0dba4f8bd4b7459d8"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.269378    2098 topology_manager.go:215] "Topology Admit Handler" podUID="b8802052-0244-4a54-accd-90cf14d1722c" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: E0624 10:29:11.270114    2098 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1edaa77a-b52f-44d6-8870-4e6e4b16b5e0" containerName="helm-test"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.270248    2098 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edaa77a-b52f-44d6-8870-4e6e4b16b5e0" containerName="helm-test"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.346248    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b8802052-0244-4a54-accd-90cf14d1722c-script\") pod \"helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") " pod="local-path-storage/helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.346574    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q588g\" (UniqueName: \"kubernetes.io/projected/b8802052-0244-4a54-accd-90cf14d1722c-kube-api-access-q588g\") pod \"helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") " pod="local-path-storage/helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.346771    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-gcp-creds\") pod \"helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") " pod="local-path-storage/helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.346814    2098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-data\") pod \"helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") " pod="local-path-storage/helper-pod-delete-pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.813307    2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edaa77a-b52f-44d6-8870-4e6e4b16b5e0" path="/var/lib/kubelet/pods/1edaa77a-b52f-44d6-8870-4e6e4b16b5e0/volumes"
	Jun 24 10:29:11 addons-517800 kubelet[2098]: I0624 10:29:11.814095    2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d384061-a2cd-496a-82a8-25e4104fafdb" path="/var/lib/kubelet/pods/2d384061-a2cd-496a-82a8-25e4104fafdb/volumes"
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.287910    2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b8802052-0244-4a54-accd-90cf14d1722c-script\") pod \"b8802052-0244-4a54-accd-90cf14d1722c\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") "
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.288489    2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-gcp-creds\") pod \"b8802052-0244-4a54-accd-90cf14d1722c\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") "
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.288594    2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-data\") pod \"b8802052-0244-4a54-accd-90cf14d1722c\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") "
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.288787    2098 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q588g\" (UniqueName: \"kubernetes.io/projected/b8802052-0244-4a54-accd-90cf14d1722c-kube-api-access-q588g\") pod \"b8802052-0244-4a54-accd-90cf14d1722c\" (UID: \"b8802052-0244-4a54-accd-90cf14d1722c\") "
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.288498    2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8802052-0244-4a54-accd-90cf14d1722c-script" (OuterVolumeSpecName: "script") pod "b8802052-0244-4a54-accd-90cf14d1722c" (UID: "b8802052-0244-4a54-accd-90cf14d1722c"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.288529    2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b8802052-0244-4a54-accd-90cf14d1722c" (UID: "b8802052-0244-4a54-accd-90cf14d1722c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.289013    2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-data" (OuterVolumeSpecName: "data") pod "b8802052-0244-4a54-accd-90cf14d1722c" (UID: "b8802052-0244-4a54-accd-90cf14d1722c"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.295868    2098 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8802052-0244-4a54-accd-90cf14d1722c-kube-api-access-q588g" (OuterVolumeSpecName: "kube-api-access-q588g") pod "b8802052-0244-4a54-accd-90cf14d1722c" (UID: "b8802052-0244-4a54-accd-90cf14d1722c"). InnerVolumeSpecName "kube-api-access-q588g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.389510    2098 reconciler_common.go:289] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b8802052-0244-4a54-accd-90cf14d1722c-script\") on node \"addons-517800\" DevicePath \"\""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.389571    2098 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-gcp-creds\") on node \"addons-517800\" DevicePath \"\""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.389585    2098 reconciler_common.go:289] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b8802052-0244-4a54-accd-90cf14d1722c-data\") on node \"addons-517800\" DevicePath \"\""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.389598    2098 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q588g\" (UniqueName: \"kubernetes.io/projected/b8802052-0244-4a54-accd-90cf14d1722c-kube-api-access-q588g\") on node \"addons-517800\" DevicePath \"\""
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.794410    2098 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8802052-0244-4a54-accd-90cf14d1722c" path="/var/lib/kubelet/pods/b8802052-0244-4a54-accd-90cf14d1722c/volumes"
	Jun 24 10:29:15 addons-517800 kubelet[2098]: I0624 10:29:15.911392    2098 scope.go:117] "RemoveContainer" containerID="3357ee3b270976ecdea1387bfeaaf63b3b4b7c265952aac65fdca3e6496b76c0"
	
	
	==> storage-provisioner [9cba7c26d971] <==
	I0624 10:24:51.038304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0624 10:24:51.115189       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0624 10:24:51.131857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0624 10:24:51.388053       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0624 10:24:51.388233       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-517800_6d59ef9e-4051-477e-b940-10f175cea3ee!
	I0624 10:24:51.388340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db32646f-6e4b-48d2-b5bb-c54e622865a2", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-517800_6d59ef9e-4051-477e-b940-10f175cea3ee became leader
	I0624 10:24:51.593940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-517800_6d59ef9e-4051-477e-b940-10f175cea3ee!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:29:09.645939    9180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-517800 -n addons-517800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-517800 -n addons-517800: (12.6014916s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-517800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-7wsxr ingress-nginx-admission-patch-mctwq test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-517800 describe pod ingress-nginx-admission-create-7wsxr ingress-nginx-admission-patch-mctwq test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-517800 describe pod ingress-nginx-admission-create-7wsxr ingress-nginx-admission-patch-mctwq test-job-nginx-0: exit status 1 (158.9522ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7wsxr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mctwq" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-517800 describe pod ingress-nginx-admission-create-7wsxr ingress-nginx-admission-patch-mctwq test-job-nginx-0: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.44s)

                                                
                                    
TestErrorSpam/setup (189.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-998200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-998200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 --driver=hyperv: (3m9.3527461s)
error_spam_test.go:96: unexpected stderr: "W0624 03:33:41.363929    2700 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-998200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19124
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-998200" primary control-plane node in "nospam-998200" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-998200" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0624 03:33:41.363929    2700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (189.35s)

                                                
                                    
TestFunctional/serial/SoftStart (280.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-094900 --alsologtostderr -v=8
E0624 03:43:21.851639     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:43:49.678739     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-094900 --alsologtostderr -v=8: exit status 90 (2m27.7786729s)

                                                
                                                
-- stdout --
	* [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	* Updating the running hyperv "functional-094900" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:43:09.441439   13548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0624 03:43:09.447962   13548 out.go:291] Setting OutFile to fd 664 ...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.448740   13548 out.go:304] Setting ErrFile to fd 1000...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.468761   13548 out.go:298] Setting JSON to false
	I0624 03:43:09.477501   13548 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16244,"bootTime":1719209544,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:43:09.479473   13548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:09.486094   13548 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:43:09.494437   13548 notify.go:220] Checking for updates...
	I0624 03:43:09.497049   13548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:43:09.499404   13548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:09.501645   13548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:43:09.506090   13548 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:09.508471   13548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:09.512353   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:09.512353   13548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:14.693063   13548 out.go:177] * Using the hyperv driver based on existing profile
	I0624 03:43:14.698333   13548 start.go:297] selected driver: hyperv
	I0624 03:43:14.698333   13548 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.698672   13548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:14.748082   13548 cni.go:84] Creating CNI manager for ""
	I0624 03:43:14.748082   13548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:14.748576   13548 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.749343   13548 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:14.755811   13548 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 03:43:14.758579   13548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:14.758579   13548 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:43:14.758579   13548 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:14.758579   13548 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:43:14.758579   13548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:14.758579   13548 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 03:43:14.762017   13548 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:14.762017   13548 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 03:43:14.763813   13548 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:14.763813   13548 fix.go:54] fixHost starting: 
	I0624 03:43:14.764063   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:17.494798   13548 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 03:43:17.494798   13548 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:17.498717   13548 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 03:43:17.501376   13548 machine.go:94] provisionDockerMachine start ...
	I0624 03:43:17.501582   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:19.660834   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:22.147106   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:22.157990   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:22.163861   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:22.164603   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:22.164603   13548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:43:22.312573   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:22.312648   13548 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 03:43:22.312754   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:24.365878   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:26.848844   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:26.860297   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:26.866464   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:26.867078   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:26.867078   13548 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 03:43:27.028071   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:27.028071   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:29.110895   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:31.664830   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:31.665356   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:31.665356   13548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:43:31.803954   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:31.803954   13548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:43:31.803954   13548 buildroot.go:174] setting up certificates
	I0624 03:43:31.803954   13548 provision.go:84] configureAuth start
	I0624 03:43:31.803954   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:33.909848   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:33.911457   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:33.911566   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:36.371938   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:38.422770   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:40.845838   13548 provision.go:143] copyHostCerts
	I0624 03:43:40.846031   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 03:43:40.846302   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 03:43:40.846398   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 03:43:40.846882   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:43:40.848489   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 03:43:40.848828   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 03:43:40.848828   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 03:43:40.849126   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:43:40.850135   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 03:43:40.850525   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 03:43:40.850584   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 03:43:40.850584   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:43:40.851434   13548 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 03:43:41.055143   13548 provision.go:177] copyRemoteCerts
	I0624 03:43:41.076689   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:43:41.076840   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:43.089540   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:43.101004   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:43.101374   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:45.558353   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:43:45.666513   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5898061s)
	I0624 03:43:45.666513   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 03:43:45.667084   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:43:45.707466   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 03:43:45.707879   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 03:43:45.754498   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 03:43:45.754928   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:43:45.799687   13548 provision.go:87] duration metric: took 13.9956771s to configureAuth
	I0624 03:43:45.799848   13548 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:43:45.800451   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:45.800585   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:47.883131   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:50.357456   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:50.358271   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:50.358271   13548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:43:50.502360   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:43:50.502593   13548 buildroot.go:70] root file system type: tmpfs
	I0624 03:43:50.502729   13548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:43:50.502897   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:55.130780   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:55.141664   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:55.147641   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:55.148202   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:55.148202   13548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:43:55.309296   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:43:55.309402   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:59.728173   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:59.740198   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:59.745881   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:59.746643   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:59.746643   13548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:43:59.916037   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:59.916037   13548 machine.go:97] duration metric: took 42.4144928s to provisionDockerMachine
	I0624 03:43:59.916037   13548 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 03:43:59.916037   13548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:43:59.931244   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:43:59.931244   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:02.064578   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:04.541369   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:04.553462   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:04.553462   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:04.670950   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7396873s)
	I0624 03:44:04.687524   13548 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:44:04.695060   13548 command_runner.go:130] > NAME=Buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 03:44:04.695060   13548 command_runner.go:130] > ID=buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 03:44:04.695060   13548 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 03:44:04.695291   13548 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:44:04.695356   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:44:04.695800   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:44:04.696898   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 03:44:04.696947   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 03:44:04.697665   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 03:44:04.697665   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> /etc/test/nested/copy/944/hosts
	I0624 03:44:04.709431   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 03:44:04.731457   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 03:44:04.778782   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 03:44:04.819557   13548 start.go:296] duration metric: took 4.9035006s for postStartSetup
	I0624 03:44:04.819557   13548 fix.go:56] duration metric: took 50.0555458s for fixHost
	I0624 03:44:04.819557   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:06.873368   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:09.360619   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:09.371886   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:09.377894   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:09.378165   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:09.378165   13548 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 03:44:09.515487   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225849.523515286
	
	I0624 03:44:09.515487   13548 fix.go:216] guest clock: 1719225849.523515286
	I0624 03:44:09.515487   13548 fix.go:229] Guest: 2024-06-24 03:44:09.523515286 -0700 PDT Remote: 2024-06-24 03:44:04.8195572 -0700 PDT m=+55.460499301 (delta=4.703958086s)
	I0624 03:44:09.516024   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:11.588037   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:11.588439   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:11.588554   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:14.105325   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:14.116712   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:14.122742   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:14.123327   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:14.123327   13548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719225849
	I0624 03:44:14.257404   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:44:09 UTC 2024
	
	I0624 03:44:14.266480   13548 fix.go:236] clock set: Mon Jun 24 10:44:09 UTC 2024
	 (err=<nil>)
	I0624 03:44:14.266480   13548 start.go:83] releasing machines lock for "functional-094900", held for 59.5025727s
	I0624 03:44:14.266717   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:18.792708   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:18.792794   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:18.798211   13548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:44:18.798211   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:18.807964   13548 ssh_runner.go:195] Run: cat /version.json
	I0624 03:44:18.807964   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.594622   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.625284   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.627240   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.627537   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.748668   13548 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9504365s)
	I0624 03:44:23.748668   13548 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: cat /version.json: (4.9406839s)
	I0624 03:44:23.763177   13548 ssh_runner.go:195] Run: systemctl --version
	I0624 03:44:23.771997   13548 command_runner.go:130] > systemd 252 (252)
	I0624 03:44:23.772132   13548 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 03:44:23.784221   13548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 03:44:23.786750   13548 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 03:44:23.792724   13548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:44:23.806462   13548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:44:23.814714   13548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 03:44:23.814714   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:23.814714   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:23.855882   13548 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 03:44:23.869045   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:44:23.901633   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:44:23.920843   13548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:44:23.932273   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:44:23.966386   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:23.995112   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:44:24.024914   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:24.057915   13548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:44:24.090275   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:44:24.122390   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:44:24.150224   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:44:24.182847   13548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:44:24.198901   13548 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 03:44:24.210083   13548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:44:24.236503   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:24.467803   13548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:44:24.506745   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:24.518868   13548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:44:24.544974   13548 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 03:44:24.545035   13548 command_runner.go:130] > [Unit]
	I0624 03:44:24.545035   13548 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 03:44:24.545114   13548 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 03:44:24.545114   13548 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 03:44:24.545114   13548 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitBurst=3
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 03:44:24.545114   13548 command_runner.go:130] > [Service]
	I0624 03:44:24.545114   13548 command_runner.go:130] > Type=notify
	I0624 03:44:24.545175   13548 command_runner.go:130] > Restart=on-failure
	I0624 03:44:24.545175   13548 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 03:44:24.545258   13548 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 03:44:24.545258   13548 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 03:44:24.545258   13548 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 03:44:24.545258   13548 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 03:44:24.545356   13548 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 03:44:24.545356   13548 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 03:44:24.545356   13548 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 03:44:24.545356   13548 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 03:44:24.545484   13548 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 03:44:24.545542   13548 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNOFILE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNPROC=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitCORE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 03:44:24.545606   13548 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 03:44:24.545606   13548 command_runner.go:130] > TasksMax=infinity
	I0624 03:44:24.545606   13548 command_runner.go:130] > TimeoutStartSec=0
	I0624 03:44:24.545606   13548 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 03:44:24.545606   13548 command_runner.go:130] > Delegate=yes
	I0624 03:44:24.545665   13548 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 03:44:24.545665   13548 command_runner.go:130] > KillMode=process
	I0624 03:44:24.545665   13548 command_runner.go:130] > [Install]
	I0624 03:44:24.545665   13548 command_runner.go:130] > WantedBy=multi-user.target
	I0624 03:44:24.559163   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.591098   13548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:44:24.636389   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.676014   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:44:24.696137   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:24.732552   13548 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 03:44:24.747391   13548 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:44:24.754399   13548 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 03:44:24.766719   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:44:24.791004   13548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:44:24.838660   13548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:44:25.097098   13548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:44:25.321701   13548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:44:25.322016   13548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:44:25.365482   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:25.595720   13548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:45:36.971718   13548 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0624 03:45:36.971718   13548 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0624 03:45:36.971718   13548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3757195s)
	I0624 03:45:36.985018   13548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 03:45:37.023710   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	I0624 03:45:37.023910   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.024033   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024258   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024514   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.024571   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.024987   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025327   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025586   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.026121   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.026282   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.026340   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.026458   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	I0624 03:45:37.026484   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.026556   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	I0624 03:45:37.026618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.026739   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.026775   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026868   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.026928   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026967   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027096   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027209   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.027334   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.027434   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.027481   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027719   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027916   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027958   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028194   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028456   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.028515   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028607   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.028638   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.028685   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.028741   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029652   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030108   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030155   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.031171   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.031200   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.031284   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032159   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032998   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033049   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034309   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034524   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034577   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034629   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034678   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034800   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	I0624 03:45:37.063128   13548 out.go:177] 
	W0624 03:45:37.064618   13548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 03:45:37.066766   13548 out.go:239] * 
	* 
	W0624 03:45:37.068602   13548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:37.072455   13548 out.go:177] 

                                                
                                                
** /stderr **
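The failure captured above is dockerd timing out while dialing /run/containerd/containerd.sock during the soft start, which leaves docker.service in a failed state. A minimal diagnostic sketch, assuming the VM is still reachable over SSH (the profile name is taken from this run; the commands are standard minikube/systemd tooling and were not executed as part of the recorded test):

    # illustrative only -- not part of the recorded test run
    out/minikube-windows-amd64.exe -p functional-094900 ssh -- "sudo systemctl status containerd docker"
    out/minikube-windows-amd64.exe -p functional-094900 ssh -- "sudo journalctl -u containerd --no-pager -n 50"
    out/minikube-windows-amd64.exe -p functional-094900 ssh -- "ls -l /run/containerd/containerd.sock"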
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-094900 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m28.3297441s for "functional-094900" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.931946s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:45:37.791206   11792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
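The stderr warning above comes from Docker CLI context resolution: the library looks for the "default" context's meta.json under C:\Users\jenkins.minikube1\.docker\contexts\meta and logs a warning when the file is absent. A hedged host-side sketch for inspecting and resetting the current context with standard docker CLI subcommands (illustrative; whether this silences the warning depends on how minikube resolves the context):

    # illustrative only -- run on the Windows host, not part of the recorded test run
    docker context ls
    docker context use default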
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (1m48.2658486s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-517800 ip                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:32 PDT |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-517800                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	| addons  | enable dashboard -p                                                   | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:33 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-517800                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:33 PDT |
	| start   | -p nospam-998200 -n=1 --memory=2250 --wait=false                      | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:36 PDT |
	|         | --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                                      | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:43:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:43:09.447962   13548 out.go:291] Setting OutFile to fd 664 ...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.448740   13548 out.go:304] Setting ErrFile to fd 1000...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.468761   13548 out.go:298] Setting JSON to false
	I0624 03:43:09.477501   13548 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16244,"bootTime":1719209544,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:43:09.479473   13548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:09.486094   13548 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:43:09.494437   13548 notify.go:220] Checking for updates...
	I0624 03:43:09.497049   13548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:43:09.499404   13548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:09.501645   13548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:43:09.506090   13548 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:09.508471   13548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:09.512353   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:09.512353   13548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:14.693063   13548 out.go:177] * Using the hyperv driver based on existing profile
	I0624 03:43:14.698333   13548 start.go:297] selected driver: hyperv
	I0624 03:43:14.698333   13548 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.698672   13548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:14.748082   13548 cni.go:84] Creating CNI manager for ""
	I0624 03:43:14.748082   13548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:14.748576   13548 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.749343   13548 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:14.755811   13548 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 03:43:14.758579   13548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:14.758579   13548 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:43:14.758579   13548 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:14.758579   13548 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:43:14.758579   13548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:14.758579   13548 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 03:43:14.762017   13548 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:14.762017   13548 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 03:43:14.763813   13548 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:14.763813   13548 fix.go:54] fixHost starting: 
	I0624 03:43:14.764063   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:17.494798   13548 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 03:43:17.494798   13548 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:17.498717   13548 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 03:43:17.501376   13548 machine.go:94] provisionDockerMachine start ...
	I0624 03:43:17.501582   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:19.660834   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:22.147106   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:22.157990   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:22.163861   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:22.164603   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:22.164603   13548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:43:22.312573   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:22.312648   13548 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 03:43:22.312754   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:24.365878   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:26.848844   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:26.860297   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:26.866464   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:26.867078   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:26.867078   13548 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 03:43:27.028071   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:27.028071   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:29.110895   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:31.664830   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:31.665356   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:31.665356   13548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:43:31.803954   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:31.803954   13548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:43:31.803954   13548 buildroot.go:174] setting up certificates
	I0624 03:43:31.803954   13548 provision.go:84] configureAuth start
	I0624 03:43:31.803954   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:33.909848   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:33.911457   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:33.911566   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:36.371938   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:38.422770   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:40.845838   13548 provision.go:143] copyHostCerts
	I0624 03:43:40.846031   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 03:43:40.846302   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 03:43:40.846398   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 03:43:40.846882   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:43:40.848489   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 03:43:40.848828   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 03:43:40.848828   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 03:43:40.849126   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:43:40.850135   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 03:43:40.850525   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 03:43:40.850584   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 03:43:40.850584   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:43:40.851434   13548 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 03:43:41.055143   13548 provision.go:177] copyRemoteCerts
	I0624 03:43:41.076689   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:43:41.076840   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:43.089540   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:43.101004   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:43.101374   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:45.558353   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:43:45.666513   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5898061s)
	I0624 03:43:45.666513   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 03:43:45.667084   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:43:45.707466   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 03:43:45.707879   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 03:43:45.754498   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 03:43:45.754928   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:43:45.799687   13548 provision.go:87] duration metric: took 13.9956771s to configureAuth
	I0624 03:43:45.799848   13548 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:43:45.800451   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:45.800585   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:47.883131   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:50.357456   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:50.358271   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:50.358271   13548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:43:50.502360   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:43:50.502593   13548 buildroot.go:70] root file system type: tmpfs
	I0624 03:43:50.502729   13548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:43:50.502897   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:55.130780   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:55.141664   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:55.147641   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:55.148202   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:55.148202   13548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:43:55.309296   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
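The unit text echoed above uses the override pattern its own comments describe: an empty ExecStart= first clears the command inherited from the base unit, and the following ExecStart= supplies the replacement. A hypothetical sketch of the same "clear then set" pattern applied as a drop-in (path and flags are illustrative, not taken from this run):

    # hypothetical example -- not part of the recorded test run
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker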
	
	I0624 03:43:55.309402   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:59.728173   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:59.740198   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:59.745881   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:59.746643   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:59.746643   13548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:43:59.916037   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:59.916037   13548 machine.go:97] duration metric: took 42.4144928s to provisionDockerMachine
	I0624 03:43:59.916037   13548 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 03:43:59.916037   13548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:43:59.931244   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:43:59.931244   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:02.064578   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:04.541369   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:04.553462   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:04.553462   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:04.670950   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7396873s)
	I0624 03:44:04.687524   13548 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:44:04.695060   13548 command_runner.go:130] > NAME=Buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 03:44:04.695060   13548 command_runner.go:130] > ID=buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 03:44:04.695060   13548 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 03:44:04.695291   13548 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:44:04.695356   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:44:04.695800   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:44:04.696898   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 03:44:04.696947   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 03:44:04.697665   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 03:44:04.697665   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> /etc/test/nested/copy/944/hosts
	I0624 03:44:04.709431   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 03:44:04.731457   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 03:44:04.778782   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 03:44:04.819557   13548 start.go:296] duration metric: took 4.9035006s for postStartSetup
	I0624 03:44:04.819557   13548 fix.go:56] duration metric: took 50.0555458s for fixHost
	I0624 03:44:04.819557   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:06.873368   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:09.360619   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:09.371886   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:09.377894   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:09.378165   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:09.378165   13548 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:44:09.515487   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225849.523515286
	
	I0624 03:44:09.515487   13548 fix.go:216] guest clock: 1719225849.523515286
	I0624 03:44:09.515487   13548 fix.go:229] Guest: 2024-06-24 03:44:09.523515286 -0700 PDT Remote: 2024-06-24 03:44:04.8195572 -0700 PDT m=+55.460499301 (delta=4.703958086s)
	I0624 03:44:09.516024   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:11.588037   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:11.588439   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:11.588554   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:14.105325   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:14.116712   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:14.122742   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:14.123327   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:14.123327   13548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719225849
	I0624 03:44:14.257404   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:44:09 UTC 2024
	
	I0624 03:44:14.266480   13548 fix.go:236] clock set: Mon Jun 24 10:44:09 UTC 2024
	 (err=<nil>)
	I0624 03:44:14.266480   13548 start.go:83] releasing machines lock for "functional-094900", held for 59.5025727s
	I0624 03:44:14.266717   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:18.792708   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:18.792794   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:18.798211   13548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:44:18.798211   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:18.807964   13548 ssh_runner.go:195] Run: cat /version.json
	I0624 03:44:18.807964   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.594622   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.625284   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.627240   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.627537   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.748668   13548 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9504365s)
	I0624 03:44:23.748668   13548 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: cat /version.json: (4.9406839s)
	I0624 03:44:23.763177   13548 ssh_runner.go:195] Run: systemctl --version
	I0624 03:44:23.771997   13548 command_runner.go:130] > systemd 252 (252)
	I0624 03:44:23.772132   13548 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 03:44:23.784221   13548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 03:44:23.786750   13548 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 03:44:23.792724   13548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:44:23.806462   13548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:44:23.814714   13548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 03:44:23.814714   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:23.814714   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:23.855882   13548 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 03:44:23.869045   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:44:23.901633   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:44:23.920843   13548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:44:23.932273   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:44:23.966386   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:23.995112   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:44:24.024914   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:24.057915   13548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:44:24.090275   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:44:24.122390   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:44:24.150224   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:44:24.182847   13548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:44:24.198901   13548 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 03:44:24.210083   13548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:44:24.236503   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:24.467803   13548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:44:24.506745   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:24.518868   13548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:44:24.544974   13548 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 03:44:24.545035   13548 command_runner.go:130] > [Unit]
	I0624 03:44:24.545035   13548 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 03:44:24.545114   13548 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 03:44:24.545114   13548 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 03:44:24.545114   13548 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitBurst=3
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 03:44:24.545114   13548 command_runner.go:130] > [Service]
	I0624 03:44:24.545114   13548 command_runner.go:130] > Type=notify
	I0624 03:44:24.545175   13548 command_runner.go:130] > Restart=on-failure
	I0624 03:44:24.545175   13548 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 03:44:24.545258   13548 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 03:44:24.545258   13548 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 03:44:24.545258   13548 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 03:44:24.545258   13548 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 03:44:24.545356   13548 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 03:44:24.545356   13548 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 03:44:24.545356   13548 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 03:44:24.545356   13548 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 03:44:24.545484   13548 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 03:44:24.545542   13548 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNOFILE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNPROC=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitCORE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 03:44:24.545606   13548 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 03:44:24.545606   13548 command_runner.go:130] > TasksMax=infinity
	I0624 03:44:24.545606   13548 command_runner.go:130] > TimeoutStartSec=0
	I0624 03:44:24.545606   13548 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 03:44:24.545606   13548 command_runner.go:130] > Delegate=yes
	I0624 03:44:24.545665   13548 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 03:44:24.545665   13548 command_runner.go:130] > KillMode=process
	I0624 03:44:24.545665   13548 command_runner.go:130] > [Install]
	I0624 03:44:24.545665   13548 command_runner.go:130] > WantedBy=multi-user.target
	I0624 03:44:24.559163   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.591098   13548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:44:24.636389   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.676014   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:44:24.696137   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:24.732552   13548 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 03:44:24.747391   13548 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:44:24.754399   13548 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 03:44:24.766719   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:44:24.791004   13548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:44:24.838660   13548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:44:25.097098   13548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:44:25.321701   13548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:44:25.322016   13548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:44:25.365482   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:25.595720   13548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:45:36.971718   13548 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0624 03:45:36.971718   13548 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0624 03:45:36.971718   13548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3757195s)
	I0624 03:45:36.985018   13548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 03:45:37.023710   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	I0624 03:45:37.023910   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.024033   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024258   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024514   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.024571   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.024987   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025327   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025586   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.026121   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.026282   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.026340   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.026458   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	I0624 03:45:37.026484   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.026556   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	I0624 03:45:37.026618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.026739   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.026775   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026868   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.026928   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026967   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027096   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027209   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.027334   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.027434   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.027481   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027719   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027916   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027958   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028194   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028456   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.028515   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028607   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.028638   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.028685   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.028741   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029652   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030108   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030155   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.031171   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.031200   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.031284   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032159   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032998   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033049   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034309   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034524   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034577   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034629   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034678   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034800   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	I0624 03:45:37.063128   13548 out.go:177] 
	W0624 03:45:37.064618   13548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 03:45:37.066766   13548 out.go:239] * 
	W0624 03:45:37.068602   13548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:37.072455   13548 out.go:177] 
	
	
	==> Docker <==
	Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
	Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:46:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:46:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T10:46:39Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.553719] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.185688] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.204617] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.735510] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 10:47:37 up 7 min,  0 users,  load average: 0.02, 0.17, 0.10
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 10:47:30 functional-094900 kubelet[2131]: I0624 10:47:30.988977    2131 status_manager.go:853] "Failed to get status for pod" podUID="19830515-9c0e-40b4-aa6e-9a097e95269b" pod="kube-system/coredns-7db6d8ff4d-59snf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-59snf\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:33 functional-094900 kubelet[2131]: E0624 10:47:33.314923    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.245867832s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 24 10:47:35 functional-094900 kubelet[2131]: E0624 10:47:35.959589    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.31.208.115:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-094900.17dbead1a739b411  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-094900,UID:3754f4d32128b99e7d404da851839fe9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-094900,},FirstTimestamp:2024-06-24 10:44:28.946617361 +0000 UTC m=+108.177830367,LastTimestamp:2024-06-24 10:44:28.946617361 +0000 UTC m=+108.1778303
67,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-094900,}"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.467725    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused" interval="7s"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.486763    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?resourceVersion=0&timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.487641    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.488583    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.489503    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.490359    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:47:36 functional-094900 kubelet[2131]: E0624 10:47:36.490444    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.610893    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.610985    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612508    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612543    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612730    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612754    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612773    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612815    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612971    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.612991    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: I0624 10:47:37.613001    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.613634    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.613661    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.613833    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 24 10:47:37 functional-094900 kubelet[2131]: E0624 10:47:37.611004    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:45:49.709786    9172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 03:46:37.251555    9172 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.288087    9172 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.318249    9172 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.355696    9172 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.384418    9172 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.422880    9172 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.449640    9172 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:46:37.479956    9172 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (11.6829365s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:47:38.500069   13952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (280.73s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (180.74s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-094900 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-094900 get po -A: exit status 1 (10.3281083s)

                                                
                                                
** stderr ** 
	E0624 03:47:52.364207    6924 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 03:47:54.393019    6924 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 03:47:56.438320    6924 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 03:47:58.474381    6924 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 03:48:00.529837    6924 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-094900 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0624 03:47:52.364207    6924 memcache.go:265] couldn't get current server API group list: Get \"https://172.31.208.115:8441/api?timeout=32s\": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.\nE0624 03:47:54.393019    6924 memcache.go:265] couldn't get current server API group list: Get \"https://172.31.208.115:8441/api?timeout=32s\": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.\nE0624 03:47:56.438320    6924 memcache.go:265] couldn't get current server API group list: Get \"https://172.31.208.115:8441/api?timeout=32s\": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.\nE0624 03:47:58.474381    6924 memcache.go:265] couldn't get current server API group list: Get \"https://172.31.208.115:8441/api?timeout=32s\": dial tcp 172.31.208.115:8441
: connectex: No connection could be made because the target machine actively refused it.\nE0624 03:48:00.529837    6924 memcache.go:265] couldn't get current server API group list: Get \"https://172.31.208.115:8441/api?timeout=32s\": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-094900 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-094900 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.7834783s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:48:00.638370    1932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
E0624 03:48:21.850948     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (2m26.3051391s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-517800 ip                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                          | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:32 PDT |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-517800                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	| addons  | enable dashboard -p                                                   | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:33 PDT |
	|         | addons-517800                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-517800                                                      | addons-517800     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:33 PDT |
	| start   | -p nospam-998200 -n=1 --memory=2250 --wait=false                      | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:36 PDT |
	|         | --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                               | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                                      | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:43:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:43:09.447962   13548 out.go:291] Setting OutFile to fd 664 ...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.448740   13548 out.go:304] Setting ErrFile to fd 1000...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.468761   13548 out.go:298] Setting JSON to false
	I0624 03:43:09.477501   13548 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16244,"bootTime":1719209544,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:43:09.479473   13548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:09.486094   13548 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:43:09.494437   13548 notify.go:220] Checking for updates...
	I0624 03:43:09.497049   13548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:43:09.499404   13548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:09.501645   13548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:43:09.506090   13548 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:09.508471   13548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:09.512353   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:09.512353   13548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:14.693063   13548 out.go:177] * Using the hyperv driver based on existing profile
	I0624 03:43:14.698333   13548 start.go:297] selected driver: hyperv
	I0624 03:43:14.698333   13548 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.698672   13548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:14.748082   13548 cni.go:84] Creating CNI manager for ""
	I0624 03:43:14.748082   13548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:14.748576   13548 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.749343   13548 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:14.755811   13548 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 03:43:14.758579   13548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:14.758579   13548 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:43:14.758579   13548 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:14.758579   13548 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:43:14.758579   13548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:14.758579   13548 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 03:43:14.762017   13548 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:14.762017   13548 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 03:43:14.763813   13548 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:14.763813   13548 fix.go:54] fixHost starting: 
	I0624 03:43:14.764063   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:17.494798   13548 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 03:43:17.494798   13548 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:17.498717   13548 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 03:43:17.501376   13548 machine.go:94] provisionDockerMachine start ...
	I0624 03:43:17.501582   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:19.660834   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:22.147106   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:22.157990   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:22.163861   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:22.164603   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:22.164603   13548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:43:22.312573   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:22.312648   13548 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 03:43:22.312754   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:24.365878   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:26.848844   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:26.860297   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:26.866464   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:26.867078   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:26.867078   13548 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 03:43:27.028071   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:27.028071   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:29.110895   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:31.664830   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:31.665356   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:31.665356   13548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:43:31.803954   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:31.803954   13548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:43:31.803954   13548 buildroot.go:174] setting up certificates
	I0624 03:43:31.803954   13548 provision.go:84] configureAuth start
	I0624 03:43:31.803954   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:33.909848   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:33.911457   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:33.911566   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:36.371938   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:38.422770   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:40.845838   13548 provision.go:143] copyHostCerts
	I0624 03:43:40.846031   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 03:43:40.846302   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 03:43:40.846398   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 03:43:40.846882   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:43:40.848489   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 03:43:40.848828   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 03:43:40.848828   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 03:43:40.849126   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:43:40.850135   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 03:43:40.850525   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 03:43:40.850584   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 03:43:40.850584   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:43:40.851434   13548 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 03:43:41.055143   13548 provision.go:177] copyRemoteCerts
	I0624 03:43:41.076689   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:43:41.076840   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:43.089540   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:43.101004   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:43.101374   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:45.558353   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:43:45.666513   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5898061s)
	I0624 03:43:45.666513   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 03:43:45.667084   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:43:45.707466   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 03:43:45.707879   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 03:43:45.754498   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 03:43:45.754928   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:43:45.799687   13548 provision.go:87] duration metric: took 13.9956771s to configureAuth
	I0624 03:43:45.799848   13548 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:43:45.800451   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:45.800585   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:47.883131   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:50.357456   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:50.358271   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:50.358271   13548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:43:50.502360   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:43:50.502593   13548 buildroot.go:70] root file system type: tmpfs
	I0624 03:43:50.502729   13548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:43:50.502897   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:55.130780   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:55.141664   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:55.147641   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:55.148202   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:55.148202   13548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:43:55.309296   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:43:55.309402   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:59.728173   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:59.740198   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:59.745881   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:59.746643   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:59.746643   13548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:43:59.916037   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:59.916037   13548 machine.go:97] duration metric: took 42.4144928s to provisionDockerMachine
	I0624 03:43:59.916037   13548 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 03:43:59.916037   13548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:43:59.931244   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:43:59.931244   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:02.064578   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:04.541369   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:04.553462   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:04.553462   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:04.670950   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7396873s)
	I0624 03:44:04.687524   13548 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:44:04.695060   13548 command_runner.go:130] > NAME=Buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 03:44:04.695060   13548 command_runner.go:130] > ID=buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 03:44:04.695060   13548 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 03:44:04.695291   13548 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:44:04.695356   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:44:04.695800   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:44:04.696898   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 03:44:04.696947   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 03:44:04.697665   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 03:44:04.697665   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> /etc/test/nested/copy/944/hosts
	I0624 03:44:04.709431   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 03:44:04.731457   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 03:44:04.778782   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 03:44:04.819557   13548 start.go:296] duration metric: took 4.9035006s for postStartSetup
	I0624 03:44:04.819557   13548 fix.go:56] duration metric: took 50.0555458s for fixHost
	I0624 03:44:04.819557   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:06.873368   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:09.360619   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:09.371886   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:09.377894   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:09.378165   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:09.378165   13548 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:44:09.515487   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225849.523515286
	
	I0624 03:44:09.515487   13548 fix.go:216] guest clock: 1719225849.523515286
	I0624 03:44:09.515487   13548 fix.go:229] Guest: 2024-06-24 03:44:09.523515286 -0700 PDT Remote: 2024-06-24 03:44:04.8195572 -0700 PDT m=+55.460499301 (delta=4.703958086s)
	I0624 03:44:09.516024   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:11.588037   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:11.588439   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:11.588554   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:14.105325   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:14.116712   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:14.122742   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:14.123327   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:14.123327   13548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719225849
	I0624 03:44:14.257404   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:44:09 UTC 2024
	
	I0624 03:44:14.266480   13548 fix.go:236] clock set: Mon Jun 24 10:44:09 UTC 2024
	 (err=<nil>)
	I0624 03:44:14.266480   13548 start.go:83] releasing machines lock for "functional-094900", held for 59.5025727s
	I0624 03:44:14.266717   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:18.792708   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:18.792794   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:18.798211   13548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:44:18.798211   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:18.807964   13548 ssh_runner.go:195] Run: cat /version.json
	I0624 03:44:18.807964   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.594622   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.625284   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.627240   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.627537   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.748668   13548 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9504365s)
	I0624 03:44:23.748668   13548 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: cat /version.json: (4.9406839s)
	I0624 03:44:23.763177   13548 ssh_runner.go:195] Run: systemctl --version
	I0624 03:44:23.771997   13548 command_runner.go:130] > systemd 252 (252)
	I0624 03:44:23.772132   13548 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 03:44:23.784221   13548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 03:44:23.786750   13548 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 03:44:23.792724   13548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:44:23.806462   13548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:44:23.814714   13548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 03:44:23.814714   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:23.814714   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:23.855882   13548 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 03:44:23.869045   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:44:23.901633   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:44:23.920843   13548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:44:23.932273   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:44:23.966386   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:23.995112   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:44:24.024914   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:24.057915   13548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:44:24.090275   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:44:24.122390   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:44:24.150224   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:44:24.182847   13548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:44:24.198901   13548 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 03:44:24.210083   13548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:44:24.236503   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:24.467803   13548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:44:24.506745   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:24.518868   13548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:44:24.544974   13548 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 03:44:24.545035   13548 command_runner.go:130] > [Unit]
	I0624 03:44:24.545035   13548 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 03:44:24.545114   13548 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 03:44:24.545114   13548 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 03:44:24.545114   13548 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitBurst=3
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 03:44:24.545114   13548 command_runner.go:130] > [Service]
	I0624 03:44:24.545114   13548 command_runner.go:130] > Type=notify
	I0624 03:44:24.545175   13548 command_runner.go:130] > Restart=on-failure
	I0624 03:44:24.545175   13548 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 03:44:24.545258   13548 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 03:44:24.545258   13548 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 03:44:24.545258   13548 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 03:44:24.545258   13548 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 03:44:24.545356   13548 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 03:44:24.545356   13548 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 03:44:24.545356   13548 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 03:44:24.545356   13548 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 03:44:24.545484   13548 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 03:44:24.545542   13548 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNOFILE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNPROC=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitCORE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 03:44:24.545606   13548 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 03:44:24.545606   13548 command_runner.go:130] > TasksMax=infinity
	I0624 03:44:24.545606   13548 command_runner.go:130] > TimeoutStartSec=0
	I0624 03:44:24.545606   13548 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 03:44:24.545606   13548 command_runner.go:130] > Delegate=yes
	I0624 03:44:24.545665   13548 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 03:44:24.545665   13548 command_runner.go:130] > KillMode=process
	I0624 03:44:24.545665   13548 command_runner.go:130] > [Install]
	I0624 03:44:24.545665   13548 command_runner.go:130] > WantedBy=multi-user.target
	I0624 03:44:24.559163   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.591098   13548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:44:24.636389   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.676014   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:44:24.696137   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:24.732552   13548 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 03:44:24.747391   13548 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:44:24.754399   13548 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 03:44:24.766719   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:44:24.791004   13548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:44:24.838660   13548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:44:25.097098   13548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:44:25.321701   13548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:44:25.322016   13548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 03:44:25.365482   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:25.595720   13548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:45:36.971718   13548 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0624 03:45:36.971718   13548 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0624 03:45:36.971718   13548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3757195s)
	I0624 03:45:36.985018   13548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 03:45:37.023710   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	I0624 03:45:37.023910   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.024033   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024258   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024514   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.024571   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.024987   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025327   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025586   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.026121   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.026282   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.026340   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.026458   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	I0624 03:45:37.026484   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.026556   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	I0624 03:45:37.026618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.026739   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.026775   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026868   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.026928   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026967   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027096   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027209   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.027334   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.027434   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.027481   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027719   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027916   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027958   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028194   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028456   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.028515   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028607   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.028638   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.028685   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.028741   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029652   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030108   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030155   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.031171   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.031200   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.031284   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032159   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032998   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033049   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034309   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034524   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034577   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034629   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034678   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034800   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
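	(Editorial illustration, not part of the captured output.) The journal lines above end with dockerd pid 3916 starting at 10:44:36 and failing at 10:45:36 because it could not dial /run/containerd/containerd.sock before its deadline, which is what surfaces below as the RUNTIME_ENABLE exit. As a minimal sketch only, the Go snippet below shows a dial-with-deadline retry loop that fails with a context-deadline error in the same way; the socket path is taken from the log above, while the 60-second timeout and 500 ms retry interval are assumptions chosen to match the observed one-minute gap.

	// Sketch: wait for a Unix socket to accept connections until a deadline expires.
	// The path and timeout are assumptions for illustration; this is not minikube code.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// waitForSocket retries dialing path until it succeeds or ctx expires.
	func waitForSocket(ctx context.Context, path string) error {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				conn.Close()
				return nil
			}
			select {
			case <-ctx.Done():
				// After the deadline this wraps context.DeadlineExceeded,
				// the same condition dockerd reports in the journal above.
				return fmt.Errorf("dial %s: %w (last error: %v)", path, ctx.Err(), err)
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
		if err := waitForSocket(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("containerd socket reachable")
	}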
	I0624 03:45:37.063128   13548 out.go:177] 
	W0624 03:45:37.064618   13548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 03:45:37.066766   13548 out.go:239] * 
	W0624 03:45:37.068602   13548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:37.072455   13548 out.go:177] 
	
	
	==> Docker <==
	Jun 24 10:48:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:48:37Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:48:37 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:48:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
	Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:49:38 functional-094900 cri-dockerd[1233]: time="2024-06-24T10:49:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T10:49:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.553719] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.185688] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.204617] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.735510] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 10:50:38 up 10 min,  0 users,  load average: 0.00, 0.09, 0.08
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 10:50:29 functional-094900 kubelet[2131]: E0624 10:50:29.458784    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:50:29 functional-094900 kubelet[2131]: E0624 10:50:29.459749    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:50:29 functional-094900 kubelet[2131]: E0624 10:50:29.459846    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 24 10:50:30 functional-094900 kubelet[2131]: I0624 10:50:30.988380    2131 status_manager.go:853] "Failed to get status for pod" podUID="19830515-9c0e-40b4-aa6e-9a097e95269b" pod="kube-system/coredns-7db6d8ff4d-59snf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-59snf\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:50:30 functional-094900 kubelet[2131]: I0624 10:50:30.989584    2131 status_manager.go:853] "Failed to get status for pod" podUID="d03b818b7b4fa1752186956a1ebf4539" pod="kube-system/kube-apiserver-functional-094900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-094900\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 10:50:31 functional-094900 kubelet[2131]: E0624 10:50:31.528996    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused" interval="7s"
	Jun 24 10:50:33 functional-094900 kubelet[2131]: E0624 10:50:33.346165    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m8.277111683s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 24 10:50:33 functional-094900 kubelet[2131]: E0624 10:50:33.623297    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.31.208.115:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-functional-094900.17dbead26d8ceaa6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-functional-094900,UID:b757e454a3a9ce4fcef3999bfc1cd742,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-094900,},FirstTimestamp:2024-06-24 10:44:32.27395959 +0000 UTC m=+111.505172596,LastTimestamp:2024-06-24 10:44:32.27395959
+0000 UTC m=+111.505172596,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-094900,}"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.346579    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.277531439s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.398674    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.398791    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.398924    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.404211    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.404453    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.405022    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.405303    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.405403    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.405626    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: I0624 10:50:38.405787    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.407558    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.408255    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.408281    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.408399    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.408459    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 24 10:50:38 functional-094900 kubelet[2131]: E0624 10:50:38.532467    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:48:12.425318    4512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 03:48:37.894827    4512 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:48:37.931124    4512 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:48:37.966041    4512 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:48:37.998922    4512 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:48:38.031929    4512 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:49:38.167833    4512 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:49:38.204283    4512 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 03:49:38.237665    4512 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (11.8315373s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:50:39.221158    2340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (180.74s)
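The kubelet log above shows two linked failures: dockerd inside the guest is unreachable at /var/run/docker.sock, and the API server at 172.31.208.115:8441 refuses connections, so node status updates, lease renewals and pod syncs all fail. A minimal triage sketch from the host, reusing the profile and binary from this run (the "docker" unit name and the journalctl usage are standard Buildroot/systemd assumptions, not taken from this log):

    # Is dockerd running inside the guest? (assumes the systemd unit is named "docker")
    out/minikube-windows-amd64.exe -p functional-094900 ssh "sudo systemctl status docker"
    # Why did it stop? Tail the unit log.
    out/minikube-windows-amd64.exe -p functional-094900 ssh "sudo journalctl -u docker -n 50 --no-pager"
    # Is anything answering on the apiserver port the kubelet keeps dialing?
    curl -k https://172.31.208.115:8441/healthz

With the container runtime down the apiserver pod cannot run, which is consistent with every control-plane request above being refused rather than timing out.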

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl images: exit status 1 (11.1350153s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:57:41.381898   13140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:57:41.381898   13140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.14s)
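This assertion reduces to listing images through the CRI endpoint inside the node and looking for the pause:3.3 image ID prefix quoted above (0184c1613d929). A rough manual equivalent, where the grep inside the guest is my own illustration rather than what the test harness does:

    out/minikube-windows-amd64.exe -p functional-094900 ssh "sudo crictl images | grep 0184c1613d929"

In this run crictl never reaches the listing step: validating the CRI v1 image API on unix:///var/run/cri-dockerd.sock times out because cri-dockerd has no running dockerd behind it.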

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (179.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 ssh sudo docker rmi registry.k8s.io/pause:latest
E0624 03:58:21.850036     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (48.2753745s)

                                                
                                                
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:57:52.517062    7620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-094900 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.1499405s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 03:58:40.793366    7792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 cache reload: (1m49.3952235s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (10.9096302s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:00:41.341501    3544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.73s)
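For reference, the cache_reload flow exercised above is: remove registry.k8s.io/pause:latest from the node, confirm it is gone, run cache reload, then confirm the image is back. The same sequence can be replayed by hand with the commands already shown in this log (a sketch that presupposes a healthy docker/cri-dockerd stack, which this run does not have):

    # first inspecti should fail (image removed); second should succeed after the reload
    out/minikube-windows-amd64.exe -p functional-094900 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-094900 cache reload
    out/minikube-windows-amd64.exe -p functional-094900 ssh sudo crictl inspecti registry.k8s.io/pause:latest

Here both inspecti calls fail the same way as every other runtime-dependent command, so the assertion at functional_test.go:1161 cannot pass regardless of what cache reload loaded.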

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (181.04s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 kubectl -- --context functional-094900 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 kubectl -- --context functional-094900 get pods: exit status 1 (10.4876998s)

                                                
                                                
** stderr ** 
	W0624 04:03:54.523587   11308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:03:56.797919   13664 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:03:58.811388   13664 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:04:00.825068   13664 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:04:02.857578   13664 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:04:04.884809   13664 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-094900 kubectl -- --context functional-094900 get pods": exit status 1
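The kubectl failure here is the same control-plane outage seen earlier, surfaced through minikube's bundled kubectl: every API discovery request to 172.31.208.115:8441 is actively refused. Two quick probes that separate "apiserver down" from "kubeconfig pointing at the wrong endpoint" (a sketch; the context name is taken from the command above):

    # Which server does the functional-094900 context actually point at?
    kubectl config view --minify --context functional-094900 -o jsonpath="{.clusters[0].cluster.server}"
    # Is that endpoint serving at all?
    kubectl --context functional-094900 get --raw /readyz

The post-mortem below reports the VM host as Running, which again points at the control-plane containers rather than the VM or the network path.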
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.5408865s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:04:05.003363   14212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (2m26.4013915s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                            | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                 |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache delete                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	| ssh     | functional-094900 ssh sudo                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-094900                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-094900 ssh                                       | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache reload                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
	| ssh     | functional-094900 ssh                                       | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:43:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:43:09.447962   13548 out.go:291] Setting OutFile to fd 664 ...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.448740   13548 out.go:304] Setting ErrFile to fd 1000...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.468761   13548 out.go:298] Setting JSON to false
	I0624 03:43:09.477501   13548 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16244,"bootTime":1719209544,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:43:09.479473   13548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:09.486094   13548 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:43:09.494437   13548 notify.go:220] Checking for updates...
	I0624 03:43:09.497049   13548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:43:09.499404   13548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:09.501645   13548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:43:09.506090   13548 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:09.508471   13548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:09.512353   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:09.512353   13548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:14.693063   13548 out.go:177] * Using the hyperv driver based on existing profile
	I0624 03:43:14.698333   13548 start.go:297] selected driver: hyperv
	I0624 03:43:14.698333   13548 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.698672   13548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:14.748082   13548 cni.go:84] Creating CNI manager for ""
	I0624 03:43:14.748082   13548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:14.748576   13548 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.749343   13548 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:14.755811   13548 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 03:43:14.758579   13548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:14.758579   13548 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:43:14.758579   13548 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:14.758579   13548 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:43:14.758579   13548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:14.758579   13548 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 03:43:14.762017   13548 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:14.762017   13548 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 03:43:14.763813   13548 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:14.763813   13548 fix.go:54] fixHost starting: 
	I0624 03:43:14.764063   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:17.494798   13548 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 03:43:17.494798   13548 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:17.498717   13548 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 03:43:17.501376   13548 machine.go:94] provisionDockerMachine start ...
	I0624 03:43:17.501582   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:19.660834   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:22.147106   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:22.157990   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:22.163861   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:22.164603   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:22.164603   13548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:43:22.312573   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:22.312648   13548 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 03:43:22.312754   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:24.365878   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:26.848844   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:26.860297   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:26.866464   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:26.867078   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:26.867078   13548 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 03:43:27.028071   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:27.028071   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:29.110895   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:31.664830   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:31.665356   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:31.665356   13548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:43:31.803954   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:31.803954   13548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:43:31.803954   13548 buildroot.go:174] setting up certificates
	I0624 03:43:31.803954   13548 provision.go:84] configureAuth start
	I0624 03:43:31.803954   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:33.909848   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:33.911457   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:33.911566   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:36.371938   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:38.422770   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:40.845838   13548 provision.go:143] copyHostCerts
	I0624 03:43:40.846031   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 03:43:40.846302   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 03:43:40.846398   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 03:43:40.846882   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:43:40.848489   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 03:43:40.848828   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 03:43:40.848828   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 03:43:40.849126   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:43:40.850135   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 03:43:40.850525   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 03:43:40.850584   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 03:43:40.850584   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:43:40.851434   13548 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 03:43:41.055143   13548 provision.go:177] copyRemoteCerts
	I0624 03:43:41.076689   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:43:41.076840   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:43.089540   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:43.101004   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:43.101374   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:45.558353   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:43:45.666513   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5898061s)
	I0624 03:43:45.666513   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 03:43:45.667084   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:43:45.707466   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 03:43:45.707879   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 03:43:45.754498   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 03:43:45.754928   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:43:45.799687   13548 provision.go:87] duration metric: took 13.9956771s to configureAuth
	I0624 03:43:45.799848   13548 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:43:45.800451   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:45.800585   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:47.883131   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:50.357456   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:50.358271   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:50.358271   13548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:43:50.502360   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:43:50.502593   13548 buildroot.go:70] root file system type: tmpfs
	I0624 03:43:50.502729   13548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:43:50.502897   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:55.130780   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:55.141664   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:55.147641   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:55.148202   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:55.148202   13548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:43:55.309296   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:43:55.309402   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:59.728173   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:59.740198   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:59.745881   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:59.746643   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:59.746643   13548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:43:59.916037   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:59.916037   13548 machine.go:97] duration metric: took 42.4144928s to provisionDockerMachine
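The single SSH command logged just above is minikube's idempotent unit-swap: the freshly rendered docker.service is installed (and Docker restarted) only when it differs from the unit already on disk. A minimal shell sketch of that same pattern, broken out for readability (same paths as in the log; run inside the guest VM):

	# install the new unit and restart Docker only if the rendered file differs
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	}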
	I0624 03:43:59.916037   13548 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 03:43:59.916037   13548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:43:59.931244   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:43:59.931244   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:02.064578   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:04.541369   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:04.553462   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:04.553462   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:04.670950   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7396873s)
	I0624 03:44:04.687524   13548 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:44:04.695060   13548 command_runner.go:130] > NAME=Buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 03:44:04.695060   13548 command_runner.go:130] > ID=buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 03:44:04.695060   13548 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 03:44:04.695291   13548 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:44:04.695356   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:44:04.695800   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:44:04.696898   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 03:44:04.696947   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 03:44:04.697665   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 03:44:04.697665   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> /etc/test/nested/copy/944/hosts
	I0624 03:44:04.709431   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 03:44:04.731457   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 03:44:04.778782   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 03:44:04.819557   13548 start.go:296] duration metric: took 4.9035006s for postStartSetup
	I0624 03:44:04.819557   13548 fix.go:56] duration metric: took 50.0555458s for fixHost
	I0624 03:44:04.819557   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:06.873368   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:09.360619   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:09.371886   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:09.377894   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:09.378165   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:09.378165   13548 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:44:09.515487   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225849.523515286
	
	I0624 03:44:09.515487   13548 fix.go:216] guest clock: 1719225849.523515286
	I0624 03:44:09.515487   13548 fix.go:229] Guest: 2024-06-24 03:44:09.523515286 -0700 PDT Remote: 2024-06-24 03:44:04.8195572 -0700 PDT m=+55.460499301 (delta=4.703958086s)
	I0624 03:44:09.516024   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:11.588037   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:11.588439   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:11.588554   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:14.105325   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:14.116712   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:14.122742   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:14.123327   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:14.123327   13548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719225849
	I0624 03:44:14.257404   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:44:09 UTC 2024
	
	I0624 03:44:14.266480   13548 fix.go:236] clock set: Mon Jun 24 10:44:09 UTC 2024
	 (err=<nil>)
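The clock-sync step above reads the guest clock over SSH (date +%s.%N), computes the drift against the host-side timestamp recorded for the same moment (about 4.7s here, per the fix.go:229 delta), and then pins the guest clock to an explicit epoch value with date -s. A minimal guest-side sketch, reusing the epoch value shown in the log:

	# print the guest clock as seconds.nanoseconds since the Unix epoch
	date +%s.%N
	# set the guest clock to an explicit epoch timestamp (value copied from the log above)
	sudo date -s @1719225849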
	I0624 03:44:14.266480   13548 start.go:83] releasing machines lock for "functional-094900", held for 59.5025727s
	I0624 03:44:14.266717   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:18.792708   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:18.792794   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:18.798211   13548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:44:18.798211   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:18.807964   13548 ssh_runner.go:195] Run: cat /version.json
	I0624 03:44:18.807964   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.594622   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.625284   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.627240   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.627537   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.748668   13548 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9504365s)
	I0624 03:44:23.748668   13548 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: cat /version.json: (4.9406839s)
	I0624 03:44:23.763177   13548 ssh_runner.go:195] Run: systemctl --version
	I0624 03:44:23.771997   13548 command_runner.go:130] > systemd 252 (252)
	I0624 03:44:23.772132   13548 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 03:44:23.784221   13548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 03:44:23.786750   13548 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 03:44:23.792724   13548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:44:23.806462   13548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:44:23.814714   13548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 03:44:23.814714   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:23.814714   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:23.855882   13548 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 03:44:23.869045   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:44:23.901633   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:44:23.920843   13548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:44:23.932273   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:44:23.966386   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:23.995112   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:44:24.024914   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:24.057915   13548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:44:24.090275   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:44:24.122390   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:44:24.150224   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:44:24.182847   13548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:44:24.198901   13548 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 03:44:24.210083   13548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:44:24.236503   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:24.467803   13548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:44:24.506745   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:24.518868   13548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:44:24.544974   13548 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 03:44:24.545035   13548 command_runner.go:130] > [Unit]
	I0624 03:44:24.545035   13548 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 03:44:24.545114   13548 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 03:44:24.545114   13548 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 03:44:24.545114   13548 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitBurst=3
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 03:44:24.545114   13548 command_runner.go:130] > [Service]
	I0624 03:44:24.545114   13548 command_runner.go:130] > Type=notify
	I0624 03:44:24.545175   13548 command_runner.go:130] > Restart=on-failure
	I0624 03:44:24.545175   13548 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 03:44:24.545258   13548 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 03:44:24.545258   13548 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 03:44:24.545258   13548 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 03:44:24.545258   13548 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 03:44:24.545356   13548 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 03:44:24.545356   13548 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 03:44:24.545356   13548 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 03:44:24.545356   13548 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 03:44:24.545484   13548 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 03:44:24.545542   13548 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNOFILE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNPROC=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitCORE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 03:44:24.545606   13548 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 03:44:24.545606   13548 command_runner.go:130] > TasksMax=infinity
	I0624 03:44:24.545606   13548 command_runner.go:130] > TimeoutStartSec=0
	I0624 03:44:24.545606   13548 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 03:44:24.545606   13548 command_runner.go:130] > Delegate=yes
	I0624 03:44:24.545665   13548 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 03:44:24.545665   13548 command_runner.go:130] > KillMode=process
	I0624 03:44:24.545665   13548 command_runner.go:130] > [Install]
	I0624 03:44:24.545665   13548 command_runner.go:130] > WantedBy=multi-user.target
	I0624 03:44:24.559163   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.591098   13548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:44:24.636389   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.676014   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:44:24.696137   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:24.732552   13548 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 03:44:24.747391   13548 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:44:24.754399   13548 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 03:44:24.766719   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:44:24.791004   13548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:44:24.838660   13548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:44:25.097098   13548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:44:25.321701   13548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:44:25.322016   13548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
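The 130-byte /etc/docker/daemon.json pushed here is not echoed in the log, so its exact contents are not shown; the docker.go:574 line only indicates that it selects cgroupfs as Docker's cgroup driver. A generic, illustrative way to achieve the same on the guest (not the actual payload minikube wrote) would be:

	# illustrative only: a daemon.json that selects the cgroupfs cgroup driver
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF

The two commands that follow in the log (systemctl daemon-reload and systemctl restart docker) then apply whatever configuration was actually written.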
	I0624 03:44:25.365482   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:25.595720   13548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:45:36.971718   13548 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0624 03:45:36.971718   13548 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0624 03:45:36.971718   13548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3757195s)
	I0624 03:45:36.985018   13548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 03:45:37.023710   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	I0624 03:45:37.023910   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.024033   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024258   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024514   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.024571   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.024987   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025327   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025586   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.026121   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.026282   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.026340   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.026458   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	I0624 03:45:37.026484   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.026556   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	I0624 03:45:37.026618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.026739   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.026775   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026868   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.026928   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026967   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027096   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027209   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.027334   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.027434   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.027481   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027719   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027916   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027958   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028194   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028456   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.028515   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028607   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.028638   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.028685   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.028741   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029652   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030108   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030155   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.031171   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.031200   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.031284   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032159   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032998   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033049   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034309   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034524   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034577   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034629   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034678   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034800   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	I0624 03:45:37.063128   13548 out.go:177] 
	W0624 03:45:37.064618   13548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 03:45:37.066766   13548 out.go:239] * 
	W0624 03:45:37.068602   13548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:37.072455   13548 out.go:177] 
	
	
	==> Docker <==
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
	Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:05:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:05:42Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T11:05:42Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.553719] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.185688] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.204617] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.735510] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:06:42 up 26 min,  0 users,  load average: 0.06, 0.04, 0.01
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 11:06:39 functional-094900 kubelet[2131]: E0624 11:06:39.513940    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 24 11:06:40 functional-094900 kubelet[2131]: E0624 11:06:40.363575    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-094900.17dbead33f053529\": dial tcp 172.31.208.115:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-094900.17dbead33f053529  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-094900,UID:d03b818b7b4fa1752186956a1ebf4539,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.31.208.115:8441/readyz\": dial tcp 172.31.208.115:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-094900,},FirstTimestamp:2024-06-24 10:44:35.788281129 +0000 UTC m=+115.019494135,LastTimestamp:2024-06-24 10:44:38.169508857 +0000 UTC m=+117.400721863,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-094900,}"
	Jun 24 11:06:40 functional-094900 kubelet[2131]: I0624 11:06:40.987708    2131 status_manager.go:853] "Failed to get status for pod" podUID="d03b818b7b4fa1752186956a1ebf4539" pod="kube-system/kube-apiserver-functional-094900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-094900\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:06:40 functional-094900 kubelet[2131]: I0624 11:06:40.988855    2131 status_manager.go:853] "Failed to get status for pod" podUID="19830515-9c0e-40b4-aa6e-9a097e95269b" pod="kube-system/coredns-7db6d8ff4d-59snf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-59snf\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:06:41 functional-094900 kubelet[2131]: E0624 11:06:41.063266    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:06:41 functional-094900 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:06:41 functional-094900 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:06:41 functional-094900 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:06:41 functional-094900 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613272    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613350    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613460    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613531    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613550    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613610    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613648    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: I0624 11:06:42.613661    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613802    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.613874    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.614038    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.614168    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.614252    2131 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.615418    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.615487    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 11:06:42 functional-094900 kubelet[2131]: E0624 11:06:42.615699    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0624 04:04:16.561788    2448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:04:42.111999    2448 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.149402    2448 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.182521    2448 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.213308    2448 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.241322    2448 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.271628    2448 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:04:42.302094    2448 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:05:42.413484    2448 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (12.0988462s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0624 04:06:43.481067    9944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (181.04s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (180.59s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.8058057s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0624 04:06:55.564213    4368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
E0624 04:08:21.849644     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (2m36.366913s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                     | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                            | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                 | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                 |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache delete                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	| ssh     | functional-094900 ssh sudo                                  | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-094900                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-094900 ssh                                       | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache reload                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
	| ssh     | functional-094900 ssh                                       | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:43:09
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:43:09.447962   13548 out.go:291] Setting OutFile to fd 664 ...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.448740   13548 out.go:304] Setting ErrFile to fd 1000...
	I0624 03:43:09.448740   13548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:43:09.468761   13548 out.go:298] Setting JSON to false
	I0624 03:43:09.477501   13548 start.go:129] hostinfo: {"hostname":"minikube1","uptime":16244,"bootTime":1719209544,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:43:09.479473   13548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:43:09.486094   13548 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:43:09.494437   13548 notify.go:220] Checking for updates...
	I0624 03:43:09.497049   13548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:43:09.499404   13548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 03:43:09.501645   13548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:43:09.506090   13548 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 03:43:09.508471   13548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 03:43:09.512353   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:09.512353   13548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:43:14.693063   13548 out.go:177] * Using the hyperv driver based on existing profile
	I0624 03:43:14.698333   13548 start.go:297] selected driver: hyperv
	I0624 03:43:14.698333   13548 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.698672   13548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 03:43:14.748082   13548 cni.go:84] Creating CNI manager for ""
	I0624 03:43:14.748082   13548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:43:14.748576   13548 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:43:14.749343   13548 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:43:14.755811   13548 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 03:43:14.758579   13548 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:43:14.758579   13548 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:43:14.758579   13548 cache.go:56] Caching tarball of preloaded images
	I0624 03:43:14.758579   13548 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 03:43:14.758579   13548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:43:14.758579   13548 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 03:43:14.762017   13548 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 03:43:14.762017   13548 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 03:43:14.763813   13548 start.go:96] Skipping create...Using existing machine configuration
	I0624 03:43:14.763813   13548 fix.go:54] fixHost starting: 
	I0624 03:43:14.764063   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:17.494709   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:17.494798   13548 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 03:43:17.494798   13548 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 03:43:17.498717   13548 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 03:43:17.501376   13548 machine.go:94] provisionDockerMachine start ...
	I0624 03:43:17.501582   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:19.660480   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:19.660834   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:22.147106   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:22.157990   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:22.163861   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:22.164603   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:22.164603   13548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 03:43:22.312573   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:22.312648   13548 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 03:43:22.312754   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:24.365878   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:24.366066   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:26.848844   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:26.860297   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:26.866464   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:26.867078   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:26.867078   13548 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 03:43:27.028071   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 03:43:27.028071   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:29.110401   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:29.110895   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:31.659445   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:31.664830   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:31.665356   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:31.665356   13548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 03:43:31.803954   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:31.803954   13548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 03:43:31.803954   13548 buildroot.go:174] setting up certificates
	I0624 03:43:31.803954   13548 provision.go:84] configureAuth start
	I0624 03:43:31.803954   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:33.909848   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:33.911457   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:33.911566   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:36.371938   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:36.385444   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:38.422619   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:38.422770   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:40.834377   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:40.845838   13548 provision.go:143] copyHostCerts
	I0624 03:43:40.846031   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 03:43:40.846302   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 03:43:40.846398   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 03:43:40.846882   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 03:43:40.848489   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 03:43:40.848828   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 03:43:40.848828   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 03:43:40.849126   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 03:43:40.850135   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 03:43:40.850525   13548 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 03:43:40.850584   13548 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 03:43:40.850584   13548 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 03:43:40.851434   13548 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 03:43:41.055143   13548 provision.go:177] copyRemoteCerts
	I0624 03:43:41.076689   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 03:43:41.076840   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:43.089540   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:43.101004   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:43.101374   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:45.558149   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:45.558353   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:43:45.666513   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5898061s)
	I0624 03:43:45.666513   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 03:43:45.667084   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 03:43:45.707466   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 03:43:45.707879   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 03:43:45.754498   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 03:43:45.754928   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 03:43:45.799687   13548 provision.go:87] duration metric: took 13.9956771s to configureAuth
	I0624 03:43:45.799848   13548 buildroot.go:189] setting minikube options for container-runtime
	I0624 03:43:45.800451   13548 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 03:43:45.800585   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:47.883018   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:47.883131   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:50.351351   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:50.357456   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:50.358271   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:50.358271   13548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 03:43:50.502360   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 03:43:50.502593   13548 buildroot.go:70] root file system type: tmpfs
	I0624 03:43:50.502729   13548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 03:43:50.502897   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:52.615465   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:55.130780   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:55.141664   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:55.147641   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:55.148202   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:55.148202   13548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 03:43:55.309296   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 03:43:55.309402   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:57.335098   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:43:59.728173   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:43:59.740198   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:43:59.745881   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:43:59.746643   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:43:59.746643   13548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 03:43:59.916037   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 03:43:59.916037   13548 machine.go:97] duration metric: took 42.4144928s to provisionDockerMachine
	I0624 03:43:59.916037   13548 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 03:43:59.916037   13548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 03:43:59.931244   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 03:43:59.931244   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:02.064578   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:02.077266   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:04.541369   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:04.553462   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:04.553462   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:04.670950   13548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7396873s)
	I0624 03:44:04.687524   13548 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 03:44:04.695060   13548 command_runner.go:130] > NAME=Buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 03:44:04.695060   13548 command_runner.go:130] > ID=buildroot
	I0624 03:44:04.695060   13548 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 03:44:04.695060   13548 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 03:44:04.695291   13548 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 03:44:04.695356   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 03:44:04.695800   13548 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 03:44:04.696898   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 03:44:04.696947   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 03:44:04.697665   13548 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 03:44:04.697665   13548 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> /etc/test/nested/copy/944/hosts
	I0624 03:44:04.709431   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 03:44:04.731457   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 03:44:04.778782   13548 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 03:44:04.819557   13548 start.go:296] duration metric: took 4.9035006s for postStartSetup
	I0624 03:44:04.819557   13548 fix.go:56] duration metric: took 50.0555458s for fixHost
	I0624 03:44:04.819557   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:06.873368   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:06.884951   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:09.360619   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:09.371886   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:09.377894   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:09.378165   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:09.378165   13548 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 03:44:09.515487   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719225849.523515286
	
	I0624 03:44:09.515487   13548 fix.go:216] guest clock: 1719225849.523515286
	I0624 03:44:09.515487   13548 fix.go:229] Guest: 2024-06-24 03:44:09.523515286 -0700 PDT Remote: 2024-06-24 03:44:04.8195572 -0700 PDT m=+55.460499301 (delta=4.703958086s)
	I0624 03:44:09.516024   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:11.588037   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:11.588439   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:11.588554   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:14.105325   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:14.116712   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:14.122742   13548 main.go:141] libmachine: Using SSH client type: native
	I0624 03:44:14.123327   13548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 03:44:14.123327   13548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719225849
	I0624 03:44:14.257404   13548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 10:44:09 UTC 2024
	
	I0624 03:44:14.266480   13548 fix.go:236] clock set: Mon Jun 24 10:44:09 UTC 2024
	 (err=<nil>)
	I0624 03:44:14.266480   13548 start.go:83] releasing machines lock for "functional-094900", held for 59.5025727s
	I0624 03:44:14.266717   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:16.327085   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:18.792708   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:18.792794   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:18.798211   13548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 03:44:18.798211   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:18.807964   13548 ssh_runner.go:195] Run: cat /version.json
	I0624 03:44:18.807964   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:20.994976   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.583636   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.594622   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.625284   13548 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 03:44:23.627240   13548 main.go:141] libmachine: [stderr =====>] : 
	I0624 03:44:23.627537   13548 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 03:44:23.748668   13548 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9504365s)
	I0624 03:44:23.748668   13548 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 03:44:23.748668   13548 ssh_runner.go:235] Completed: cat /version.json: (4.9406839s)
	I0624 03:44:23.763177   13548 ssh_runner.go:195] Run: systemctl --version
	I0624 03:44:23.771997   13548 command_runner.go:130] > systemd 252 (252)
	I0624 03:44:23.772132   13548 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 03:44:23.784221   13548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 03:44:23.786750   13548 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 03:44:23.792724   13548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 03:44:23.806462   13548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 03:44:23.814714   13548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 03:44:23.814714   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:23.814714   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:23.855882   13548 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 03:44:23.869045   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 03:44:23.901633   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 03:44:23.920843   13548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 03:44:23.932273   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 03:44:23.966386   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:23.995112   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 03:44:24.024914   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 03:44:24.057915   13548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 03:44:24.090275   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 03:44:24.122390   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 03:44:24.150224   13548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 03:44:24.182847   13548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 03:44:24.198901   13548 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 03:44:24.210083   13548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 03:44:24.236503   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:24.467803   13548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 03:44:24.506745   13548 start.go:494] detecting cgroup driver to use...
	I0624 03:44:24.518868   13548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 03:44:24.544974   13548 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 03:44:24.545035   13548 command_runner.go:130] > [Unit]
	I0624 03:44:24.545035   13548 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 03:44:24.545114   13548 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 03:44:24.545114   13548 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 03:44:24.545114   13548 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitBurst=3
	I0624 03:44:24.545114   13548 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 03:44:24.545114   13548 command_runner.go:130] > [Service]
	I0624 03:44:24.545114   13548 command_runner.go:130] > Type=notify
	I0624 03:44:24.545175   13548 command_runner.go:130] > Restart=on-failure
	I0624 03:44:24.545175   13548 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 03:44:24.545258   13548 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 03:44:24.545258   13548 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 03:44:24.545258   13548 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 03:44:24.545258   13548 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 03:44:24.545356   13548 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 03:44:24.545356   13548 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 03:44:24.545356   13548 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 03:44:24.545356   13548 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 03:44:24.545417   13548 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 03:44:24.545484   13548 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 03:44:24.545542   13548 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNOFILE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitNPROC=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > LimitCORE=infinity
	I0624 03:44:24.545542   13548 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 03:44:24.545606   13548 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 03:44:24.545606   13548 command_runner.go:130] > TasksMax=infinity
	I0624 03:44:24.545606   13548 command_runner.go:130] > TimeoutStartSec=0
	I0624 03:44:24.545606   13548 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 03:44:24.545606   13548 command_runner.go:130] > Delegate=yes
	I0624 03:44:24.545665   13548 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 03:44:24.545665   13548 command_runner.go:130] > KillMode=process
	I0624 03:44:24.545665   13548 command_runner.go:130] > [Install]
	I0624 03:44:24.545665   13548 command_runner.go:130] > WantedBy=multi-user.target
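	[editor's note] The empty ExecStart= followed by a full ExecStart= line in the unit dumped above is the standard systemd drop-in pattern: the blank directive clears the inherited start command so the second one replaces it rather than being rejected as a duplicate. An illustrative drop-in using the same pattern (hypothetical path and dockerd flags, not the unit shown above) would be:

	    # Illustrative only: replacing (not appending to) ExecStart via a drop-in.
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	    EOF
	    sudo systemctl daemon-reload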
	I0624 03:44:24.559163   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.591098   13548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 03:44:24.636389   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 03:44:24.676014   13548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 03:44:24.696137   13548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 03:44:24.732552   13548 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
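	[editor's note] The %!s(MISSING) in the logged command is a Go fmt placeholder for a missing format argument in the log message, not part of the command that ran; the confirmed output line above shows what was written. An assumed reconstruction of the effective command (at minimum, the runtime endpoint shown in the output) is:

	    sudo mkdir -p /etc
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml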
	I0624 03:44:24.747391   13548 ssh_runner.go:195] Run: which cri-dockerd
	I0624 03:44:24.754399   13548 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 03:44:24.766719   13548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 03:44:24.791004   13548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 03:44:24.838660   13548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 03:44:25.097098   13548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 03:44:25.321701   13548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 03:44:25.322016   13548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
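	[editor's note] The 130-byte daemon.json itself is not echoed in the log. Using Docker's documented configuration keys, a file that pins the cgroupfs driver (as the preceding line says this one does) would look roughly like the following; this is illustrative content, not the exact bytes minikube wrote:

	    # Hypothetical daemon.json selecting the cgroupfs cgroup driver.
	    sudo tee /etc/docker/daemon.json <<'EOF'
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF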
	I0624 03:44:25.365482   13548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 03:44:25.595720   13548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 03:45:36.971718   13548 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0624 03:45:36.971718   13548 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0624 03:45:36.971718   13548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3757195s)
	I0624 03:45:36.985018   13548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 03:45:37.023710   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.023822   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	I0624 03:45:37.023910   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.023944   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.024033   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024060   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024171   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024258   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024286   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024345   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.024452   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.024514   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.024571   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.024606   13548 command_runner.go:130] > Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.024673   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.024726   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.024821   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.024875   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.024923   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.024987   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025051   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025120   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.025214   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025271   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025327   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025370   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025433   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025501   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025586   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025628   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025691   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.025764   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.025868   13548 command_runner.go:130] > Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	I0624 03:45:37.025942   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.026013   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.026121   13548 command_runner.go:130] > Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.026220   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.026282   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.026340   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.026375   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.026458   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	I0624 03:45:37.026484   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.026556   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	I0624 03:45:37.026618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.026662   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.026739   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.026775   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026868   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.026928   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.026967   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027011   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027096   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027123   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.027209   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.027244   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.027334   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.027361   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.027434   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.027481   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.027502   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.027562   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027618   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027719   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027746   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027801   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027851   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027916   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027958   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.027980   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028031   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028097   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028194   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028221   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028287   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028339   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.028401   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.028456   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.028515   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.028607   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.028638   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.028685   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.028741   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.028792   13548 command_runner.go:130] > Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 03:45:37.028840   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029390   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029584   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029652   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029718   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 03:45:37.029770   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 03:45:37.029877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 03:45:37.029961   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 03:45:37.030031   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030108   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 03:45:37.030155   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 03:45:37.030190   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030260   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030290   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030505   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030587   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030649   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030710   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030760   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030829   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 03:45:37.030877   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 03:45:37.030951   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 03:45:37.031080   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	I0624 03:45:37.031171   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 03:45:37.031200   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	I0624 03:45:37.031243   13548 command_runner.go:130] > Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 03:45:37.031284   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031340   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.031877   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032159   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032223   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.032755   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.032998   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033049   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 03:45:37.033109   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	I0624 03:45:37.033178   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.033710   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.033999   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034135   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034192   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034309   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034524   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034577   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034629   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034678   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034746   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034800   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 03:45:37.034851   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0624 03:45:37.035584   13548 command_runner.go:130] > Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	I0624 03:45:37.063128   13548 out.go:177] 
	W0624 03:45:37.064618   13548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 03:45:37.066766   13548 out.go:239] * 
	W0624 03:45:37.068602   13548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 03:45:37.072455   13548 out.go:177] 
	
	
	==> Docker <==
	Jun 24 11:07:42 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:07:42Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
	Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 11:08:43 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:08:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T11:08:43Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.553719] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.185688] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.204617] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.735510] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:09:43 up 29 min,  0 users,  load average: 0.02, 0.05, 0.01
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 11:09:41 functional-094900 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:09:41 functional-094900 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.984881    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?resourceVersion=0&timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.985797    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.987042    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.988143    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.989659    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:09:42 functional-094900 kubelet[2131]: E0624 11:09:42.989775    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.399606    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.399757    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.399898    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400152    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400266    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400406    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400537    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: I0624 11:09:43.400579    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400698    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.400800    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.401515    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.401756    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.401960    2131 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.403833    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.404096    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.404811    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 24 11:09:43 functional-094900 kubelet[2131]: E0624 11:09:43.578957    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 25m18.509311428s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:07:07.378853    3924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:07:42.882242    3924 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:07:42.918240    3924 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:07:42.949147    3924 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:07:42.987533    3924 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:07:43.018536    3924 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:07:43.048045    3924 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:08:43.148918    3924 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:08:43.181414    3924 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (11.9378646s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:09:44.240148    2420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (180.59s)
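Note: the Docker log above ends with dockerd timing out while dialing "/run/containerd/containerd.sock" ("context deadline exceeded" roughly 60s after "Starting up"), so docker.service never recovers and the apiserver stays unreachable. A minimal follow-up sketch using only commands the report itself recommends (profile name taken from this run; exact output will differ per machine):

	out/minikube-windows-amd64.exe -p functional-094900 logs --file=logs.txt                                  # collect logs.txt to attach to a GitHub issue, per the advice box above
	out/minikube-windows-amd64.exe -p functional-094900 ssh -- sudo systemctl status docker.service           # check the failing unit inside the guest
	out/minikube-windows-amd64.exe -p functional-094900 ssh -- sudo journalctl -xeu docker.service --no-pager # full journal for docker.service, as the error text suggests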

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (301.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-094900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0624 04:11:25.070294     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-094900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m48.084753s)

                                                
                                                
-- stdout --
	* [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	* Updating the running hyperv "functional-094900" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:09:56.167894    6272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
	Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:46:37 functional-094900 dockerd[4431]: time="2024-06-24T10:46:37.586824113Z" level=info msg="Starting up"
	Jun 24 10:47:37 functional-094900 dockerd[4431]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:47:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jun 24 10:47:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:47:37 functional-094900 dockerd[4653]: time="2024-06-24T10:47:37.862696025Z" level=info msg="Starting up"
	Jun 24 10:48:37 functional-094900 dockerd[4653]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:48:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
	Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jun 24 10:49:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:49:38 functional-094900 dockerd[5189]: time="2024-06-24T10:49:38.371622809Z" level=info msg="Starting up"
	Jun 24 10:50:38 functional-094900 dockerd[5189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:50:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jun 24 10:50:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:50:38 functional-094900 dockerd[5414]: time="2024-06-24T10:50:38.614330084Z" level=info msg="Starting up"
	Jun 24 10:51:38 functional-094900 dockerd[5414]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:51:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jun 24 10:51:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:51:38 functional-094900 dockerd[5673]: time="2024-06-24T10:51:38.883496088Z" level=info msg="Starting up"
	Jun 24 10:52:38 functional-094900 dockerd[5673]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:52:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jun 24 10:52:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:52:39 functional-094900 dockerd[5883]: time="2024-06-24T10:52:39.154000751Z" level=info msg="Starting up"
	Jun 24 10:53:39 functional-094900 dockerd[5883]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:53:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jun 24 10:53:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:53:39 functional-094900 dockerd[6106]: time="2024-06-24T10:53:39.378634263Z" level=info msg="Starting up"
	Jun 24 10:54:39 functional-094900 dockerd[6106]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:54:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jun 24 10:54:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:54:39 functional-094900 dockerd[6322]: time="2024-06-24T10:54:39.640816472Z" level=info msg="Starting up"
	Jun 24 10:55:39 functional-094900 dockerd[6322]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:55:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jun 24 10:55:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:55:39 functional-094900 dockerd[6552]: time="2024-06-24T10:55:39.883655191Z" level=info msg="Starting up"
	Jun 24 10:56:39 functional-094900 dockerd[6552]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:56:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jun 24 10:56:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:56:40 functional-094900 dockerd[6764]: time="2024-06-24T10:56:40.369690362Z" level=info msg="Starting up"
	Jun 24 10:57:40 functional-094900 dockerd[6764]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:57:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jun 24 10:57:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:57:40 functional-094900 dockerd[7001]: time="2024-06-24T10:57:40.640830194Z" level=info msg="Starting up"
	Jun 24 10:58:40 functional-094900 dockerd[7001]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:58:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jun 24 10:58:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:58:40 functional-094900 dockerd[7244]: time="2024-06-24T10:58:40.902491856Z" level=info msg="Starting up"
	Jun 24 10:59:40 functional-094900 dockerd[7244]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:59:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jun 24 10:59:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:59:41 functional-094900 dockerd[7488]: time="2024-06-24T10:59:41.167040582Z" level=info msg="Starting up"
	Jun 24 11:00:41 functional-094900 dockerd[7488]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:00:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jun 24 11:00:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:00:41 functional-094900 dockerd[7716]: time="2024-06-24T11:00:41.384363310Z" level=info msg="Starting up"
	Jun 24 11:01:41 functional-094900 dockerd[7716]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:01:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jun 24 11:01:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:01:41 functional-094900 dockerd[8019]: time="2024-06-24T11:01:41.637458699Z" level=info msg="Starting up"
	Jun 24 11:02:41 functional-094900 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:02:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jun 24 11:02:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:02:41 functional-094900 dockerd[8232]: time="2024-06-24T11:02:41.846453303Z" level=info msg="Starting up"
	Jun 24 11:03:41 functional-094900 dockerd[8232]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:03:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jun 24 11:03:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:03:42 functional-094900 dockerd[8446]: time="2024-06-24T11:03:42.087902952Z" level=info msg="Starting up"
	Jun 24 11:04:42 functional-094900 dockerd[8446]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
	Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jun 24 11:05:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:05:42 functional-094900 dockerd[8994]: time="2024-06-24T11:05:42.587871779Z" level=info msg="Starting up"
	Jun 24 11:06:42 functional-094900 dockerd[8994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:06:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jun 24 11:06:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:06:42 functional-094900 dockerd[9200]: time="2024-06-24T11:06:42.851146986Z" level=info msg="Starting up"
	Jun 24 11:07:42 functional-094900 dockerd[9200]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
	Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jun 24 11:08:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:08:43 functional-094900 dockerd[9748]: time="2024-06-24T11:08:43.371382553Z" level=info msg="Starting up"
	Jun 24 11:09:43 functional-094900 dockerd[9748]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:09:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jun 24 11:09:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:09:43 functional-094900 dockerd[9964]: time="2024-06-24T11:09:43.621132733Z" level=info msg="Starting up"
	Jun 24 11:10:43 functional-094900 dockerd[9964]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:10:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jun 24 11:10:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:10:43 functional-094900 dockerd[10356]: time="2024-06-24T11:10:43.885023688Z" level=info msg="Starting up"
	Jun 24 11:11:16 functional-094900 dockerd[10356]: time="2024-06-24T11:11:16.110406215Z" level=info msg="Processing signal 'terminated'"
	Jun 24 11:11:43 functional-094900 dockerd[10356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:11:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:11:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:11:43 functional-094900 dockerd[10783]: time="2024-06-24T11:11:43.985181384Z" level=info msg="Starting up"
	Jun 24 11:12:44 functional-094900 dockerd[10783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:12:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-094900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m48.2067468s for "functional-094900" cluster.
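Every restart attempt in the journal above dies the same way: dockerd blocks on /run/containerd/containerd.sock for 60 seconds and then gives up with "context deadline exceeded", so containerd inside the guest is the component that never comes up. As a rough illustration only (this is not dockerd's or minikube's actual client code; the probe below is a hypothetical sketch that only borrows the socket path from the log), a blocking gRPC dial against a socket with no live server behind it fails in exactly this shape:

// probe_containerd.go: illustrative sketch, not part of the test suite.
// Assumes the standard containerd socket path seen in the journal above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// The journal shows dockerd giving up roughly 60 seconds after "Starting up".
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// grpc.WithBlock makes the dial wait until the connection is ready; if
	// nothing ever listens on the socket, gRPC keeps retrying until ctx
	// expires and the call returns "context deadline exceeded", matching
	// the repeated journal entries above.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		fmt.Println("failed to dial containerd:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket is reachable, state:", conn.GetState())
}

Pointed at a socket with a live containerd behind it, the same probe returns well before the deadline, which is the quickest way to tell a slow daemon from one that never started.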
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.8830285s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:12:44.397898    8928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
E0624 04:13:21.854744     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (1m48.7076772s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                                         | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                              |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache delete                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	| ssh     | functional-094900 ssh sudo                                               | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-094900                                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-094900 ssh                                                    | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache reload                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
	| ssh     | functional-094900 ssh                                                    | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                             | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:09 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 04:09:56
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 04:09:56.169374    6272 out.go:291] Setting OutFile to fd 776 ...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.169374    6272 out.go:304] Setting ErrFile to fd 1000...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.192082    6272 out.go:298] Setting JSON to false
	I0624 04:09:56.194083    6272 start.go:129] hostinfo: {"hostname":"minikube1","uptime":17851,"bootTime":1719209544,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 04:09:56.194083    6272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 04:09:56.199085    6272 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 04:09:56.201449    6272 notify.go:220] Checking for updates...
	I0624 04:09:56.201449    6272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:09:56.204617    6272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 04:09:56.207687    6272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 04:09:56.209721    6272 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 04:09:56.212428    6272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 04:09:56.216786    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:09:56.216786    6272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 04:10:01.520141    6272 out.go:177] * Using the hyperv driver based on existing profile
	I0624 04:10:01.523654    6272 start.go:297] selected driver: hyperv
	I0624 04:10:01.523654    6272 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.523654    6272 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 04:10:01.574064    6272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:10:01.574064    6272 cni.go:84] Creating CNI manager for ""
	I0624 04:10:01.574064    6272 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 04:10:01.574643    6272 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.574802    6272 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 04:10:01.580373    6272 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 04:10:01.582564    6272 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:10:01.582564    6272 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 04:10:01.582564    6272 cache.go:56] Caching tarball of preloaded images
	I0624 04:10:01.582564    6272 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:10:01.582564    6272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:10:01.582564    6272 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 04:10:01.584620    6272 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:10:01.584620    6272 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 04:10:01.585549    6272 start.go:96] Skipping create...Using existing machine configuration
	I0624 04:10:01.585549    6272 fix.go:54] fixHost starting: 
	I0624 04:10:01.585549    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:04.327490    6272 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 04:10:04.327490    6272 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 04:10:04.330864    6272 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 04:10:04.334348    6272 machine.go:94] provisionDockerMachine start ...
	I0624 04:10:04.334348    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:09.049416    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:09.049640    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:09.055373    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:09.056075    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:09.056075    6272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:10:09.184519    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:09.184704    6272 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 04:10:09.184799    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:11.277821    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:13.819687    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:13.820422    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:13.820422    6272 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 04:10:13.989233    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:13.989368    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:18.763676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:18.763776    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:18.763776    6272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:10:18.905084    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:10:18.905084    6272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:10:18.905084    6272 buildroot.go:174] setting up certificates
	I0624 04:10:18.905084    6272 provision.go:84] configureAuth start
	I0624 04:10:18.905084    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:23.658272    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:28.332135    6272 provision.go:143] copyHostCerts
	I0624 04:10:28.332962    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:10:28.332962    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:10:28.333499    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:10:28.334533    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:10:28.334533    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:10:28.334533    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:10:28.335905    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:10:28.335905    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:10:28.336542    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:10:28.337629    6272 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 04:10:28.909857    6272 provision.go:177] copyRemoteCerts
	I0624 04:10:28.919860    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:10:28.919860    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:33.573811    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:33.690262    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7703173s)
	I0624 04:10:33.690859    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 04:10:33.738657    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:10:33.785623    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:10:33.832878    6272 provision.go:87] duration metric: took 14.9277378s to configureAuth
	I0624 04:10:33.832878    6272 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:10:33.833584    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:10:33.833584    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:35.961242    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:35.962274    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:35.962335    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:38.447273    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:38.447542    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:38.453995    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:38.454680    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:38.454680    6272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:10:38.586052    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:10:38.586052    6272 buildroot.go:70] root file system type: tmpfs
	I0624 04:10:38.586603    6272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:10:38.586603    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:40.661150    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:40.662074    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:40.662133    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:43.184441    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:43.185079    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:43.191645    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:43.191810    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:43.191810    6272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:10:43.345253    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:10:43.345452    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:48.024920    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:48.024975    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:48.031261    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:48.031261    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:48.031261    6272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:10:48.188214    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:10:48.188214    6272 machine.go:97] duration metric: took 43.8537018s to provisionDockerMachine
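The provisioning step above updates the Docker systemd unit idempotently: the rendered unit is written to docker.service.new over SSH, and Docker is only swapped in and restarted when the new file differs from the installed one. A minimal sketch of that pattern, with the paths and commands taken from the log line above (not a verbatim copy of minikube's template):

    # Sketch: install a new unit file only if it differs from the current one
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }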
	I0624 04:10:48.188214    6272 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 04:10:48.188214    6272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:10:48.202185    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:10:48.202185    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:52.814932    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:52.931376    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.729112s)
	I0624 04:10:52.942928    6272 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:10:52.949218    6272 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:10:52.949218    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:10:52.950127    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:10:52.951430    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:10:52.952592    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 04:10:52.962084    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 04:10:52.982953    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:10:53.027604    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 04:10:53.074856    6272 start.go:296] duration metric: took 4.8866228s for postStartSetup
	I0624 04:10:53.074856    6272 fix.go:56] duration metric: took 51.4891134s for fixHost
	I0624 04:10:53.074856    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:55.164624    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:57.701580    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:57.702374    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:57.702374    6272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719227457.843280300
	
	I0624 04:10:57.840765    6272 fix.go:216] guest clock: 1719227457.843280300
	I0624 04:10:57.840765    6272 fix.go:229] Guest: 2024-06-24 04:10:57.8432803 -0700 PDT Remote: 2024-06-24 04:10:53.0748563 -0700 PDT m=+56.992022601 (delta=4.768424s)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:59.988560    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:02.532676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:11:02.532676    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:11:02.532676    6272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719227457
	I0624 04:11:02.687106    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:10:57 UTC 2024
	
	I0624 04:11:02.687106    6272 fix.go:236] clock set: Mon Jun 24 11:10:57 UTC 2024
	 (err=<nil>)
	I0624 04:11:02.687106    6272 start.go:83] releasing machines lock for "functional-094900", held for 1m1.1022557s
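The fixHost step above reads the guest clock, compares it with the host clock (delta=4.768424s in this run), and then writes whole epoch seconds back into the guest. A minimal sketch of the two guest-side commands, with the values seen in this log; the `%!s(MISSING)` artifact in the logged command corresponds to the usual `date +%s.%N` format string:

    # Sketch: read and then set the guest clock (values from this run)
    date +%s.%N               # guest reports 1719227457.843280300
    sudo date -s @1719227457  # set the guest clock to whole epoch seconds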
	I0624 04:11:02.687652    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:07.279101    6272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:11:07.279135    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:07.291913    6272 ssh_runner.go:195] Run: cat /version.json
	I0624 04:11:07.291913    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.237571    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.260890    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: cat /version.json: (7.0503534s)
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0631653s)
	W0624 04:11:14.342293    6272 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0624 04:11:14.342293    6272 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0624 04:11:14.342293    6272 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0624 04:11:14.354665    6272 ssh_runner.go:195] Run: systemctl --version
	I0624 04:11:14.376249    6272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 04:11:14.386363    6272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:11:14.397260    6272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:11:14.415590    6272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 04:11:14.415590    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:14.415832    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:14.464291    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:11:14.496544    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:11:14.516006    6272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:11:14.525959    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:11:14.557998    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.589894    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:11:14.622466    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.658749    6272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:11:14.690692    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:11:14.724824    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:11:14.754263    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:11:14.784168    6272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:11:14.813679    6272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:11:14.846037    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:15.061547    6272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:11:15.095654    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:15.107262    6272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:11:15.141870    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.175611    6272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:11:15.219872    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.257821    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:11:15.281036    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:15.328376    6272 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:11:15.347052    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:11:15.364821    6272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:11:15.412796    6272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:11:15.618728    6272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:11:15.819205    6272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:11:15.819413    6272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
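The docker.go:574 line above notes that minikube rewrites /etc/docker/daemon.json (130 bytes here) so that Docker uses "cgroupfs" as its cgroup driver. A minimal illustrative sketch of such a config, assuming the standard `native.cgroupdriver` exec-opt; this is not the exact 130-byte file copied above:

    # Sketch: a daemon.json selecting the cgroupfs cgroup driver (assumed content)
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF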
	I0624 04:11:15.864903    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:16.082704    6272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:12:44.005774    6272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9226447s)
	I0624 04:12:44.018618    6272 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 04:12:44.094779    6272 out.go:177] 
	W0624 04:12:44.098077    6272 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
	Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:46:37 functional-094900 dockerd[4431]: time="2024-06-24T10:46:37.586824113Z" level=info msg="Starting up"
	Jun 24 10:47:37 functional-094900 dockerd[4431]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:47:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jun 24 10:47:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:47:37 functional-094900 dockerd[4653]: time="2024-06-24T10:47:37.862696025Z" level=info msg="Starting up"
	Jun 24 10:48:37 functional-094900 dockerd[4653]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:48:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
	Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jun 24 10:49:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:49:38 functional-094900 dockerd[5189]: time="2024-06-24T10:49:38.371622809Z" level=info msg="Starting up"
	Jun 24 10:50:38 functional-094900 dockerd[5189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:50:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jun 24 10:50:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:50:38 functional-094900 dockerd[5414]: time="2024-06-24T10:50:38.614330084Z" level=info msg="Starting up"
	Jun 24 10:51:38 functional-094900 dockerd[5414]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:51:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jun 24 10:51:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:51:38 functional-094900 dockerd[5673]: time="2024-06-24T10:51:38.883496088Z" level=info msg="Starting up"
	Jun 24 10:52:38 functional-094900 dockerd[5673]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:52:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jun 24 10:52:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:52:39 functional-094900 dockerd[5883]: time="2024-06-24T10:52:39.154000751Z" level=info msg="Starting up"
	Jun 24 10:53:39 functional-094900 dockerd[5883]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:53:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jun 24 10:53:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:53:39 functional-094900 dockerd[6106]: time="2024-06-24T10:53:39.378634263Z" level=info msg="Starting up"
	Jun 24 10:54:39 functional-094900 dockerd[6106]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:54:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jun 24 10:54:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:54:39 functional-094900 dockerd[6322]: time="2024-06-24T10:54:39.640816472Z" level=info msg="Starting up"
	Jun 24 10:55:39 functional-094900 dockerd[6322]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:55:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jun 24 10:55:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:55:39 functional-094900 dockerd[6552]: time="2024-06-24T10:55:39.883655191Z" level=info msg="Starting up"
	Jun 24 10:56:39 functional-094900 dockerd[6552]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:56:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jun 24 10:56:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:56:40 functional-094900 dockerd[6764]: time="2024-06-24T10:56:40.369690362Z" level=info msg="Starting up"
	Jun 24 10:57:40 functional-094900 dockerd[6764]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:57:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jun 24 10:57:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:57:40 functional-094900 dockerd[7001]: time="2024-06-24T10:57:40.640830194Z" level=info msg="Starting up"
	Jun 24 10:58:40 functional-094900 dockerd[7001]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:58:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jun 24 10:58:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:58:40 functional-094900 dockerd[7244]: time="2024-06-24T10:58:40.902491856Z" level=info msg="Starting up"
	Jun 24 10:59:40 functional-094900 dockerd[7244]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:59:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jun 24 10:59:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:59:41 functional-094900 dockerd[7488]: time="2024-06-24T10:59:41.167040582Z" level=info msg="Starting up"
	Jun 24 11:00:41 functional-094900 dockerd[7488]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:00:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jun 24 11:00:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:00:41 functional-094900 dockerd[7716]: time="2024-06-24T11:00:41.384363310Z" level=info msg="Starting up"
	Jun 24 11:01:41 functional-094900 dockerd[7716]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:01:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jun 24 11:01:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:01:41 functional-094900 dockerd[8019]: time="2024-06-24T11:01:41.637458699Z" level=info msg="Starting up"
	Jun 24 11:02:41 functional-094900 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:02:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jun 24 11:02:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:02:41 functional-094900 dockerd[8232]: time="2024-06-24T11:02:41.846453303Z" level=info msg="Starting up"
	Jun 24 11:03:41 functional-094900 dockerd[8232]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:03:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jun 24 11:03:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:03:42 functional-094900 dockerd[8446]: time="2024-06-24T11:03:42.087902952Z" level=info msg="Starting up"
	Jun 24 11:04:42 functional-094900 dockerd[8446]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
	Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jun 24 11:05:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:05:42 functional-094900 dockerd[8994]: time="2024-06-24T11:05:42.587871779Z" level=info msg="Starting up"
	Jun 24 11:06:42 functional-094900 dockerd[8994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:06:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jun 24 11:06:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:06:42 functional-094900 dockerd[9200]: time="2024-06-24T11:06:42.851146986Z" level=info msg="Starting up"
	Jun 24 11:07:42 functional-094900 dockerd[9200]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
	Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jun 24 11:08:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:08:43 functional-094900 dockerd[9748]: time="2024-06-24T11:08:43.371382553Z" level=info msg="Starting up"
	Jun 24 11:09:43 functional-094900 dockerd[9748]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:09:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jun 24 11:09:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:09:43 functional-094900 dockerd[9964]: time="2024-06-24T11:09:43.621132733Z" level=info msg="Starting up"
	Jun 24 11:10:43 functional-094900 dockerd[9964]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:10:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jun 24 11:10:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:10:43 functional-094900 dockerd[10356]: time="2024-06-24T11:10:43.885023688Z" level=info msg="Starting up"
	Jun 24 11:11:16 functional-094900 dockerd[10356]: time="2024-06-24T11:11:16.110406215Z" level=info msg="Processing signal 'terminated'"
	Jun 24 11:11:43 functional-094900 dockerd[10356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:11:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:11:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:11:43 functional-094900 dockerd[10783]: time="2024-06-24T11:11:43.985181384Z" level=info msg="Starting up"
	Jun 24 11:12:44 functional-094900 dockerd[10783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:12:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 04:12:44.099441    6272 out.go:239] * 
	W0624 04:12:44.101667    6272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 04:12:44.115005    6272 out.go:177] 
	
	
	==> Docker <==
	Jun 24 11:12:44 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:12:44 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:12:44 functional-094900 dockerd[10987]: time="2024-06-24T11:12:44.242073597Z" level=info msg="Starting up"
	Jun 24 11:13:44 functional-094900 dockerd[10987]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:13:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 11:13:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:13:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:13:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:13:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 11:13:44 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 11:13:44 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:13:44 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T11:13:46Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	[Jun24 11:11] systemd-fstab-generator[10619]: Ignoring "noauto" option for root device
	[  +0.557846] systemd-fstab-generator[10655]: Ignoring "noauto" option for root device
	[  +0.211796] systemd-fstab-generator[10680]: Ignoring "noauto" option for root device
	[  +0.243443] systemd-fstab-generator[10694]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 11:14:44 up 34 min,  0 users,  load average: 0.16, 0.05, 0.01
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 11:14:40 functional-094900 kubelet[2131]: I0624 11:14:40.989580    2131 status_manager.go:853] "Failed to get status for pod" podUID="19830515-9c0e-40b4-aa6e-9a097e95269b" pod="kube-system/coredns-7db6d8ff4d-59snf" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-59snf\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:14:41 functional-094900 kubelet[2131]: E0624 11:14:41.038618    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused" interval="7s"
	Jun 24 11:14:41 functional-094900 kubelet[2131]: E0624 11:14:41.062148    2131 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:14:41 functional-094900 kubelet[2131]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:14:41 functional-094900 kubelet[2131]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:14:41 functional-094900 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:14:41 functional-094900 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:14:43 functional-094900 kubelet[2131]: E0624 11:14:43.635610    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 30m18.566086824s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.634634    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.634724    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: I0624 11:14:44.634756    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.634827    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.635078    2131 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.635235    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.635782    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.636007    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.636329    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.636686    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.636712    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.636727    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.635096    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.638275    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.638840    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.639672    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 24 11:14:44 functional-094900 kubelet[2131]: E0624 11:14:44.645881    2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.31.208.115:8441: connect: connection refused" event="&Event{ObjectMeta:{coredns-7db6d8ff4d-59snf.17dbead45006e73e  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7db6d8ff4d-59snf,UID:19830515-9c0e-40b4-aa6e-9a097e95269b,APIVersion:v1,ResourceVersion:361,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://10.244.0.2:8080/health\": dial tcp 10.244.0.2:8080: connect: no route to host,Source:EventSource{Component:kubelet,Host:functional-094900,},FirstTimestamp:2024-06-24 10:44:40.368572222 +0000 UTC m=+119.599785328,LastTimestamp:2024-06-24 10:44:40.368572222 +0000 UTC m=+119.599785328,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-094900,}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:12:56.255373    1176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:13:44.267576    1176 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.302686    1176 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.335670    1176 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.371415    1176 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.401117    1176 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.432446    1176 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.462254    1176 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:13:44.491389    1176 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (11.9571086s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:14:45.346494    5444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (301.13s)

                                                
                                    
TestFunctional/serial/ComponentHealth (180.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-094900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-094900 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (10.2607007s)

                                                
                                                
** stderr ** 
	E0624 04:14:59.379499    6364 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:15:01.402703    6364 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:15:03.420088    6364 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:15:05.431958    6364 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	E0624 04:15:07.463252    6364 memcache.go:265] couldn't get current server API group list: Get "https://172.31.208.115:8441/api?timeout=32s": dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.31.208.115:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-094900 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-094900 -n functional-094900: exit status 2 (11.1258814s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:15:07.564943   10852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 logs -n 25: (2m27.0193962s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                  | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-998200                                                         | nospam-998200     | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                              | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                              |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache delete                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	| ssh     | functional-094900 ssh sudo                                               | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-094900                                                        | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-094900 ssh                                                    | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-094900 cache reload                                           | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
	| ssh     | functional-094900 ssh                                                    | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                             | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:09 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 04:09:56
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 04:09:56.169374    6272 out.go:291] Setting OutFile to fd 776 ...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.169374    6272 out.go:304] Setting ErrFile to fd 1000...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.192082    6272 out.go:298] Setting JSON to false
	I0624 04:09:56.194083    6272 start.go:129] hostinfo: {"hostname":"minikube1","uptime":17851,"bootTime":1719209544,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 04:09:56.194083    6272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 04:09:56.199085    6272 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 04:09:56.201449    6272 notify.go:220] Checking for updates...
	I0624 04:09:56.201449    6272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:09:56.204617    6272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 04:09:56.207687    6272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 04:09:56.209721    6272 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 04:09:56.212428    6272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 04:09:56.216786    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:09:56.216786    6272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 04:10:01.520141    6272 out.go:177] * Using the hyperv driver based on existing profile
	I0624 04:10:01.523654    6272 start.go:297] selected driver: hyperv
	I0624 04:10:01.523654    6272 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.523654    6272 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 04:10:01.574064    6272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:10:01.574064    6272 cni.go:84] Creating CNI manager for ""
	I0624 04:10:01.574064    6272 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 04:10:01.574643    6272 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.574802    6272 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 04:10:01.580373    6272 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 04:10:01.582564    6272 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:10:01.582564    6272 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 04:10:01.582564    6272 cache.go:56] Caching tarball of preloaded images
	I0624 04:10:01.582564    6272 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:10:01.582564    6272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:10:01.582564    6272 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 04:10:01.584620    6272 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:10:01.584620    6272 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 04:10:01.585549    6272 start.go:96] Skipping create...Using existing machine configuration
	I0624 04:10:01.585549    6272 fix.go:54] fixHost starting: 
	I0624 04:10:01.585549    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:04.327490    6272 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 04:10:04.327490    6272 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 04:10:04.330864    6272 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 04:10:04.334348    6272 machine.go:94] provisionDockerMachine start ...
	I0624 04:10:04.334348    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:09.049416    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:09.049640    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:09.055373    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:09.056075    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:09.056075    6272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:10:09.184519    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:09.184704    6272 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 04:10:09.184799    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:11.277821    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:13.819687    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:13.820422    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:13.820422    6272 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 04:10:13.989233    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:13.989368    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:18.763676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:18.763776    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:18.763776    6272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:10:18.905084    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
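The SSH command above is minikube's idempotent hostname-entry update: it only touches /etc/hosts when no entry for the node name exists, rewriting an existing 127.0.1.1 line if present and appending one otherwise. A standalone sketch of the same pattern (assumptions: run as root on the guest; NAME is illustrative and taken from this run):

    # Sketch only: idempotent /etc/hosts hostname entry.
    NAME=functional-094900
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
            # rewrite the existing loopback alias line
            sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
        else
            # no alias line yet: append one
            echo "127.0.1.1 ${NAME}" >> /etc/hosts
        fi
    fi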
	I0624 04:10:18.905084    6272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:10:18.905084    6272 buildroot.go:174] setting up certificates
	I0624 04:10:18.905084    6272 provision.go:84] configureAuth start
	I0624 04:10:18.905084    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:23.658272    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:28.332135    6272 provision.go:143] copyHostCerts
	I0624 04:10:28.332962    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:10:28.332962    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:10:28.333499    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:10:28.334533    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:10:28.334533    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:10:28.334533    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:10:28.335905    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:10:28.335905    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:10:28.336542    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:10:28.337629    6272 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 04:10:28.909857    6272 provision.go:177] copyRemoteCerts
	I0624 04:10:28.919860    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:10:28.919860    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:33.573811    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:33.690262    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7703173s)
	I0624 04:10:33.690859    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 04:10:33.738657    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:10:33.785623    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:10:33.832878    6272 provision.go:87] duration metric: took 14.9277378s to configureAuth
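configureAuth above regenerates the Docker server certificate with the VM's current address in its SANs and copies it to /etc/docker. A quick way to confirm what actually landed on the guest (a sketch, not part of the test run; assumes SSH access with the machine key shown in the log, path written POSIX-style here):

    # Sketch only: inspect the server certificate copied to /etc/docker and its SANs.
    ssh -i ~/.minikube/machines/functional-094900/id_rsa docker@172.31.208.115 \
        'sudo openssl x509 -in /etc/docker/server.pem -noout -text' \
        | grep -A1 'Subject Alternative Name'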
	I0624 04:10:33.832878    6272 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:10:33.833584    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:10:33.833584    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:35.961242    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:35.962274    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:35.962335    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:38.447273    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:38.447542    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:38.453995    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:38.454680    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:38.454680    6272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:10:38.586052    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:10:38.586052    6272 buildroot.go:70] root file system type: tmpfs
	I0624 04:10:38.586603    6272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:10:38.586603    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:40.661150    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:40.662074    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:40.662133    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:43.184441    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:43.185079    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:43.191645    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:43.191810    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:43.191810    6272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:10:43.345253    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:10:43.345452    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:48.024920    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:48.024975    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:48.031261    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:48.031261    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:48.031261    6272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:10:48.188214    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:10:48.188214    6272 machine.go:97] duration metric: took 43.8537018s to provisionDockerMachine
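The docker.service rewrite a few lines above is install-if-changed: the rendered unit is written to docker.service.new and only swapped in (followed by daemon-reload, enable and restart) when it differs from what is already installed, so an unchanged unit causes no docker restart. The same pattern, sketched standalone with the paths from this log:

    # Sketch only: install a systemd unit only when its content actually changed.
    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
        # files differ: swap the new unit in and restart the service
        sudo mv "$UNIT.new" "$UNIT"
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi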
	I0624 04:10:48.188214    6272 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 04:10:48.188214    6272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:10:48.202185    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:10:48.202185    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:52.814932    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:52.931376    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.729112s)
	I0624 04:10:52.942928    6272 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:10:52.949218    6272 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:10:52.949218    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:10:52.950127    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:10:52.951430    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:10:52.952592    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 04:10:52.962084    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 04:10:52.982953    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:10:53.027604    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 04:10:53.074856    6272 start.go:296] duration metric: took 4.8866228s for postStartSetup
	I0624 04:10:53.074856    6272 fix.go:56] duration metric: took 51.4891134s for fixHost
	I0624 04:10:53.074856    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:55.164624    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:57.701580    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:57.702374    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:57.702374    6272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719227457.843280300
	
	I0624 04:10:57.840765    6272 fix.go:216] guest clock: 1719227457.843280300
	I0624 04:10:57.840765    6272 fix.go:229] Guest: 2024-06-24 04:10:57.8432803 -0700 PDT Remote: 2024-06-24 04:10:53.0748563 -0700 PDT m=+56.992022601 (delta=4.768424s)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:59.988560    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:02.532676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:11:02.532676    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:11:02.532676    6272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719227457
	I0624 04:11:02.687106    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:10:57 UTC 2024
	
	I0624 04:11:02.687106    6272 fix.go:236] clock set: Mon Jun 24 11:10:57 UTC 2024
	 (err=<nil>)
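The clock-sync step above reads the guest's epoch time (evidently date +%s.%N, given the seconds.nanoseconds output), compares it against the host (a delta of roughly 4.8s in this run), and steps the guest clock with sudo date -s @<epoch>. A host-side sketch of the same check (hypothetical; assumes SSH access to the guest as in this log):

    # Sketch only: compare host and guest clocks, then step the guest to the host's time.
    GUEST=docker@172.31.208.115
    host_now=$(date +%s)
    guest_now=$(ssh "$GUEST" 'date +%s')
    echo "clock delta (host - guest): $((host_now - guest_now))s"
    ssh "$GUEST" "sudo date -s @${host_now}"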
	I0624 04:11:02.687106    6272 start.go:83] releasing machines lock for "functional-094900", held for 1m1.1022557s
	I0624 04:11:02.687652    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:07.279101    6272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:11:07.279135    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:07.291913    6272 ssh_runner.go:195] Run: cat /version.json
	I0624 04:11:07.291913    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.237571    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.260890    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: cat /version.json: (7.0503534s)
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0631653s)
	W0624 04:11:14.342293    6272 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0624 04:11:14.342293    6272 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0624 04:11:14.342293    6272 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
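The curl probe failed with exit status 28 because name resolution for registry.k8s.io timed out inside the guest, which is what triggers the proxy warning above. Useful follow-up checks from inside the VM (a sketch, not part of the test run) to narrow the failure down to DNS versus general egress:

    # Sketch only: DNS and egress checks from the guest.
    cat /etc/resolv.conf                     # which resolver the guest is using
    nslookup registry.k8s.io                 # does name resolution work at all?
    curl -sS -m 5 https://registry.k8s.io/ -o /dev/null -w '%{http_code}\n'
    env | grep -i _proxy                     # proxy settings the daemon would need to inherit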
	I0624 04:11:14.354665    6272 ssh_runner.go:195] Run: systemctl --version
	I0624 04:11:14.376249    6272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 04:11:14.386363    6272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:11:14.397260    6272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:11:14.415590    6272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 04:11:14.415590    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:14.415832    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:14.464291    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:11:14.496544    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:11:14.516006    6272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:11:14.525959    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:11:14.557998    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.589894    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:11:14.622466    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.658749    6272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:11:14.690692    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:11:14.724824    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:11:14.754263    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:11:14.784168    6272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:11:14.813679    6272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:11:14.846037    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:15.061547    6272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:11:15.095654    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:15.107262    6272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:11:15.141870    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.175611    6272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:11:15.219872    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.257821    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:11:15.281036    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:15.328376    6272 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:11:15.347052    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:11:15.364821    6272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
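Having settled on Docker as the runtime, minikube points crictl at cri-dockerd by writing /etc/crictl.yaml (runtime-endpoint: unix:///var/run/cri-dockerd.sock) and drops a 10-cni.conf override into cri-docker.service.d. A sketch for verifying that wiring once the runtime is up (assumes crictl is on the guest's PATH):

    # Sketch only: confirm crictl talks to cri-dockerd via the configured endpoint.
    cat /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version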
	I0624 04:11:15.412796    6272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:11:15.618728    6272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:11:15.819205    6272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:11:15.819413    6272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:11:15.864903    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:16.082704    6272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:12:44.005774    6272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9226447s)
	I0624 04:12:44.018618    6272 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 04:12:44.094779    6272 out.go:177] 
	W0624 04:12:44.098077    6272 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
	Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:46:37 functional-094900 dockerd[4431]: time="2024-06-24T10:46:37.586824113Z" level=info msg="Starting up"
	Jun 24 10:47:37 functional-094900 dockerd[4431]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:47:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jun 24 10:47:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:47:37 functional-094900 dockerd[4653]: time="2024-06-24T10:47:37.862696025Z" level=info msg="Starting up"
	Jun 24 10:48:37 functional-094900 dockerd[4653]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:48:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
	Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jun 24 10:49:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:49:38 functional-094900 dockerd[5189]: time="2024-06-24T10:49:38.371622809Z" level=info msg="Starting up"
	Jun 24 10:50:38 functional-094900 dockerd[5189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:50:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jun 24 10:50:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:50:38 functional-094900 dockerd[5414]: time="2024-06-24T10:50:38.614330084Z" level=info msg="Starting up"
	Jun 24 10:51:38 functional-094900 dockerd[5414]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:51:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jun 24 10:51:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:51:38 functional-094900 dockerd[5673]: time="2024-06-24T10:51:38.883496088Z" level=info msg="Starting up"
	Jun 24 10:52:38 functional-094900 dockerd[5673]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:52:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jun 24 10:52:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:52:39 functional-094900 dockerd[5883]: time="2024-06-24T10:52:39.154000751Z" level=info msg="Starting up"
	Jun 24 10:53:39 functional-094900 dockerd[5883]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:53:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jun 24 10:53:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:53:39 functional-094900 dockerd[6106]: time="2024-06-24T10:53:39.378634263Z" level=info msg="Starting up"
	Jun 24 10:54:39 functional-094900 dockerd[6106]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:54:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jun 24 10:54:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:54:39 functional-094900 dockerd[6322]: time="2024-06-24T10:54:39.640816472Z" level=info msg="Starting up"
	Jun 24 10:55:39 functional-094900 dockerd[6322]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:55:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jun 24 10:55:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:55:39 functional-094900 dockerd[6552]: time="2024-06-24T10:55:39.883655191Z" level=info msg="Starting up"
	Jun 24 10:56:39 functional-094900 dockerd[6552]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:56:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jun 24 10:56:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:56:40 functional-094900 dockerd[6764]: time="2024-06-24T10:56:40.369690362Z" level=info msg="Starting up"
	Jun 24 10:57:40 functional-094900 dockerd[6764]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:57:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jun 24 10:57:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:57:40 functional-094900 dockerd[7001]: time="2024-06-24T10:57:40.640830194Z" level=info msg="Starting up"
	Jun 24 10:58:40 functional-094900 dockerd[7001]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:58:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jun 24 10:58:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:58:40 functional-094900 dockerd[7244]: time="2024-06-24T10:58:40.902491856Z" level=info msg="Starting up"
	Jun 24 10:59:40 functional-094900 dockerd[7244]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:59:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jun 24 10:59:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:59:41 functional-094900 dockerd[7488]: time="2024-06-24T10:59:41.167040582Z" level=info msg="Starting up"
	Jun 24 11:00:41 functional-094900 dockerd[7488]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:00:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jun 24 11:00:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:00:41 functional-094900 dockerd[7716]: time="2024-06-24T11:00:41.384363310Z" level=info msg="Starting up"
	Jun 24 11:01:41 functional-094900 dockerd[7716]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:01:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jun 24 11:01:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:01:41 functional-094900 dockerd[8019]: time="2024-06-24T11:01:41.637458699Z" level=info msg="Starting up"
	Jun 24 11:02:41 functional-094900 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:02:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jun 24 11:02:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:02:41 functional-094900 dockerd[8232]: time="2024-06-24T11:02:41.846453303Z" level=info msg="Starting up"
	Jun 24 11:03:41 functional-094900 dockerd[8232]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:03:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jun 24 11:03:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:03:42 functional-094900 dockerd[8446]: time="2024-06-24T11:03:42.087902952Z" level=info msg="Starting up"
	Jun 24 11:04:42 functional-094900 dockerd[8446]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
	Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jun 24 11:05:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:05:42 functional-094900 dockerd[8994]: time="2024-06-24T11:05:42.587871779Z" level=info msg="Starting up"
	Jun 24 11:06:42 functional-094900 dockerd[8994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:06:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jun 24 11:06:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:06:42 functional-094900 dockerd[9200]: time="2024-06-24T11:06:42.851146986Z" level=info msg="Starting up"
	Jun 24 11:07:42 functional-094900 dockerd[9200]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
	Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jun 24 11:08:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:08:43 functional-094900 dockerd[9748]: time="2024-06-24T11:08:43.371382553Z" level=info msg="Starting up"
	Jun 24 11:09:43 functional-094900 dockerd[9748]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:09:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jun 24 11:09:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:09:43 functional-094900 dockerd[9964]: time="2024-06-24T11:09:43.621132733Z" level=info msg="Starting up"
	Jun 24 11:10:43 functional-094900 dockerd[9964]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:10:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jun 24 11:10:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:10:43 functional-094900 dockerd[10356]: time="2024-06-24T11:10:43.885023688Z" level=info msg="Starting up"
	Jun 24 11:11:16 functional-094900 dockerd[10356]: time="2024-06-24T11:11:16.110406215Z" level=info msg="Processing signal 'terminated'"
	Jun 24 11:11:43 functional-094900 dockerd[10356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:11:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:11:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:11:43 functional-094900 dockerd[10783]: time="2024-06-24T11:11:43.985181384Z" level=info msg="Starting up"
	Jun 24 11:12:44 functional-094900 dockerd[10783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:12:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 04:12:44.099441    6272 out.go:239] * 
	W0624 04:12:44.101667    6272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 04:12:44.115005    6272 out.go:177] 
	
	
	==> Docker <==
	Jun 24 11:15:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:15:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 11:15:44 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:15:44Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 11:15:45 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 11:15:45 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:15:45 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:15:45 functional-094900 dockerd[11810]: time="2024-06-24T11:15:45.140670064Z" level=info msg="Starting up"
	Jun 24 11:16:45 functional-094900 dockerd[11810]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:16:45 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:16:45 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:16:45 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3'"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="error getting RW layer size for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:16:45 functional-094900 cri-dockerd[1233]: time="2024-06-24T11:16:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-24T11:16:47Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.187185] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.172305] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.249699] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +2.273583] hrtimer: interrupt took 3445308 ns
	[  +6.355573] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.097526] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.294817] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.662654] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +6.038523] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +0.085831] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.020958] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.142371] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.829599] systemd-fstab-generator[2363]: Ignoring "noauto" option for root device
	[  +0.180243] kauditd_printk_skb: 12 callbacks suppressed
	[Jun24 10:43] kauditd_printk_skb: 69 callbacks suppressed
	[Jun24 10:44] systemd-fstab-generator[3443]: Ignoring "noauto" option for root device
	[  +0.145345] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.470310] systemd-fstab-generator[3478]: Ignoring "noauto" option for root device
	[  +0.258974] systemd-fstab-generator[3490]: Ignoring "noauto" option for root device
	[  +0.245072] systemd-fstab-generator[3505]: Ignoring "noauto" option for root device
	[  +5.313538] kauditd_printk_skb: 89 callbacks suppressed
	[Jun24 11:11] systemd-fstab-generator[10619]: Ignoring "noauto" option for root device
	[  +0.557846] systemd-fstab-generator[10655]: Ignoring "noauto" option for root device
	[  +0.211796] systemd-fstab-generator[10680]: Ignoring "noauto" option for root device
	[  +0.243443] systemd-fstab-generator[10694]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 11:17:45 up 37 min,  0 users,  load average: 0.02, 0.03, 0.00
	Linux functional-094900 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 24 11:17:41 functional-094900 kubelet[2131]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:17:41 functional-094900 kubelet[2131]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.161105    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?resourceVersion=0&timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.162082    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.163118    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.164611    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.165720    2131 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-094900\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused"
	Jun 24 11:17:42 functional-094900 kubelet[2131]: E0624 11:17:42.165812    2131 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 24 11:17:43 functional-094900 kubelet[2131]: E0624 11:17:43.103072    2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-094900?timeout=10s\": dial tcp 172.31.208.115:8441: connect: connection refused" interval="7s"
	Jun 24 11:17:43 functional-094900 kubelet[2131]: E0624 11:17:43.666657    2131 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 33m18.597142202s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386012    2131 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386057    2131 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386114    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386143    2131 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386164    2131 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386189    2131 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: I0624 11:17:45.386201    2131 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386237    2131 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386255    2131 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386269    2131 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386291    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.386316    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.387328    2131 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.387452    2131 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 24 11:17:45 functional-094900 kubelet[2131]: E0624 11:17:45.388081    2131 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:15:18.693074    7932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:15:44.887766    7932 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:15:44.920028    7932 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:15:44.950524    7932 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:15:44.978142    7932 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:15:45.006822    7932 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:15:45.037466    7932 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:16:45.166804    7932 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:16:45.200613    7932 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-094900 -n functional-094900: exit status 2 (11.230485s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:17:46.137439    6400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-094900" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (180.06s)
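The failure above reduces to a single symptom: dockerd inside the functional-094900 guest never manages to dial /run/containerd/containerd.sock within its 60-second startup deadline, so systemd keeps cycling docker.service (restart counters 5 through 26 above) and every cri-dockerd and kubelet call that needs the runtime fails in turn. A minimal triage sketch follows, assuming the VM is still reachable over minikube ssh; these commands are illustrative and were not run as part of the recorded test:

	# Illustrative triage only; assumes the functional-094900 VM is still reachable via minikube ssh.
	# Check whether containerd itself is up inside the guest and inspect its recent journal.
	out/minikube-windows-amd64.exe -p functional-094900 ssh -- sudo systemctl status containerd --no-pager
	out/minikube-windows-amd64.exe -p functional-094900 ssh -- sudo journalctl -u containerd --no-pager -n 50
	# Verify that the socket dockerd is trying to dial actually exists.
	out/minikube-windows-amd64.exe -p functional-094900 ssh -- ls -l /run/containerd/containerd.sock

If containerd reports active but the dial still times out, the containerd journal around the 10:48-11:16 UTC window covered by the log above is the next place to look.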

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (107s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 logs
E0624 04:18:21.848585     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-094900 logs: exit status 1 (1m46.2707082s)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | binary-mirror-877500                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:61584                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-877500                                                                     | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:21 PDT |
	| addons  | disable dashboard -p                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-517800 --wait=true                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:28 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
	| ip      | addons-517800 ip                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-517800 ssh cat                                                                       | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
	|         | /opt/local-path-provisioner/pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:30 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| ssh     | addons-517800 ssh curl -s                                                                   | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-517800 ip                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
	|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:32 PDT |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| stop    | -p addons-517800                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	| addons  | enable dashboard -p                                                                         | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:33 PDT |
	|         | addons-517800                                                                               |                      |                   |         |                     |                     |
	| delete  | -p addons-517800                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:33 PDT |
	| start   | -p nospam-998200 -n=1 --memory=2250 --wait=false                                            | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:36 PDT |
	|         | --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                       |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT |                     |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| delete  | -p nospam-998200                                                                            | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
	| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
	|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
	| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                                                 |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache delete                                                              | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | minikube-local-cache-test:functional-094900                                                 |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | list                                                                                        | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
	| ssh     | functional-094900 ssh sudo                                                                  | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | crictl images                                                                               |                      |                   |         |                     |                     |
	| ssh     | functional-094900                                                                           | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
	|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| ssh     | functional-094900 ssh                                                                       | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-094900 cache reload                                                              | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
	| ssh     | functional-094900 ssh                                                                       | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                                                | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                                                 |                      |                   |         |                     |                     |
	|         | get pods                                                                                    |                      |                   |         |                     |                     |
	| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:09 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
	|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 04:09:56
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 04:09:56.169374    6272 out.go:291] Setting OutFile to fd 776 ...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.169374    6272 out.go:304] Setting ErrFile to fd 1000...
	I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:09:56.192082    6272 out.go:298] Setting JSON to false
	I0624 04:09:56.194083    6272 start.go:129] hostinfo: {"hostname":"minikube1","uptime":17851,"bootTime":1719209544,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 04:09:56.194083    6272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 04:09:56.199085    6272 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 04:09:56.201449    6272 notify.go:220] Checking for updates...
	I0624 04:09:56.201449    6272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:09:56.204617    6272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 04:09:56.207687    6272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 04:09:56.209721    6272 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 04:09:56.212428    6272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 04:09:56.216786    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:09:56.216786    6272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 04:10:01.520141    6272 out.go:177] * Using the hyperv driver based on existing profile
	I0624 04:10:01.523654    6272 start.go:297] selected driver: hyperv
	I0624 04:10:01.523654    6272 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.523654    6272 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 04:10:01.574064    6272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:10:01.574064    6272 cni.go:84] Creating CNI manager for ""
	I0624 04:10:01.574064    6272 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 04:10:01.574643    6272 start.go:340] cluster config:
	{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:10:01.574802    6272 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 04:10:01.580373    6272 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
	I0624 04:10:01.582564    6272 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:10:01.582564    6272 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 04:10:01.582564    6272 cache.go:56] Caching tarball of preloaded images
	I0624 04:10:01.582564    6272 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:10:01.582564    6272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:10:01.582564    6272 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
	I0624 04:10:01.584620    6272 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:10:01.584620    6272 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
	I0624 04:10:01.585549    6272 start.go:96] Skipping create...Using existing machine configuration
	I0624 04:10:01.585549    6272 fix.go:54] fixHost starting: 
	I0624 04:10:01.585549    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:04.327490    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:04.327490    6272 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
	W0624 04:10:04.327490    6272 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 04:10:04.330864    6272 out.go:177] * Updating the running hyperv "functional-094900" VM ...
	I0624 04:10:04.334348    6272 machine.go:94] provisionDockerMachine start ...
	I0624 04:10:04.334348    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:06.500727    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:09.049416    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:09.049640    6272 main.go:141] libmachine: [stderr =====>] : 
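Editor's note: the repeated [executing ==>] / [stdout =====>] pairs above show the pattern the driver follows before every guest operation: shell out to PowerShell once for the VM state and once for the first IP address of the first network adapter, then open SSH to that address. A minimal, self-contained sketch of that pattern in Go, using only the PowerShell expressions visible in the log; runPS and the hard-coded profile name are illustrative, not libmachine's actual API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPS mirrors the PowerShell invocation seen in the log:
    // non-interactive, no profile, one expression per call.
    func runPS(expr string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := runPS(`( Hyper-V\Get-VM functional-094900 ).state`)
        if err != nil {
            fmt.Println("state query failed:", err)
            return
        }
        ip, err := runPS(`(( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            fmt.Println("ip query failed:", err)
            return
        }
        fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.31.208.115
    }

In this run each PowerShell call costs roughly two seconds (compare the timestamps above), which is why even small provisioning steps take tens of seconds end to end.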
	I0624 04:10:09.055373    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:09.056075    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:09.056075    6272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:10:09.184519    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:09.184704    6272 buildroot.go:166] provisioning hostname "functional-094900"
	I0624 04:10:09.184799    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:11.277821    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:11.278790    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:13.814522    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:13.819687    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:13.820422    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:13.820422    6272 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
	I0624 04:10:13.989233    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900
	
	I0624 04:10:13.989368    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:16.156521    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:18.756341    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:18.763676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:18.763776    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:18.763776    6272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:10:18.905084    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
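Editor's note: the shell block above is an idempotent hostname guard: if no /etc/hosts line already ends in functional-094900, it rewrites an existing 127.0.1.1 entry or appends a new one. The same logic expressed in-process as a rough Go sketch (the function name, path, and permissions are illustrative; on the VM this runs through the SSH command shown, not locally):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostsEntry mirrors the guard above: skip if the name is already
    // mapped, otherwise rewrite an existing 127.0.1.1 line or append one.
    func ensureHostsEntry(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
            return nil // hostname already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
        } else {
            data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", name))...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "functional-094900"); err != nil {
            fmt.Println("hosts update failed:", err)
        }
    }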
	I0624 04:10:18.905084    6272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:10:18.905084    6272 buildroot.go:174] setting up certificates
	I0624 04:10:18.905084    6272 provision.go:84] configureAuth start
	I0624 04:10:18.905084    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:21.059954    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:23.658050    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:23.658272    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:25.793188    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:28.332135    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:28.332135    6272 provision.go:143] copyHostCerts
	I0624 04:10:28.332962    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:10:28.332962    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:10:28.333499    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:10:28.334533    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:10:28.334533    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:10:28.334533    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:10:28.335905    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:10:28.335905    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:10:28.336542    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:10:28.337629    6272 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
	I0624 04:10:28.909857    6272 provision.go:177] copyRemoteCerts
	I0624 04:10:28.919860    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:10:28.919860    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:31.039795    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:33.573514    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:33.573811    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:33.690262    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7703173s)
	I0624 04:10:33.690859    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0624 04:10:33.738657    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:10:33.785623    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:10:33.832878    6272 provision.go:87] duration metric: took 14.9277378s to configureAuth
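Editor's note: configureAuth regenerates the Docker server certificate with the SAN list printed at 04:10:28 (127.0.0.1, 172.31.208.115, functional-094900, localhost, minikube), signs it with the CA under .minikube\certs, and copies the result to /etc/docker on the guest. A rough standard-library sketch of issuing such a certificate; the paths and SANs come from this log, while the PKCS#1 key format, the 26280h lifetime, and the error handling are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        caPEM, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem`)
        check(err)
        caKeyPEM, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem`)
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
        check(err)

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-094900"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list as printed by provision.go:117 in this run
            DNSNames:    []string{"functional-094900", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.31.208.115")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }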
	I0624 04:10:33.832878    6272 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:10:33.833584    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:10:33.833584    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:35.961242    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:35.962274    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:35.962335    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:38.447273    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:38.447542    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:38.453995    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:38.454680    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:38.454680    6272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:10:38.586052    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:10:38.586052    6272 buildroot.go:70] root file system type: tmpfs
	I0624 04:10:38.586603    6272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:10:38.586603    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:40.661150    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:40.662074    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:40.662133    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:43.184441    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:43.185079    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:43.191645    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:43.191810    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:43.191810    6272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:10:43.345253    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:10:43.345452    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:45.417488    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:48.024920    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:48.024975    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:48.031261    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:48.031261    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:48.031261    6272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:10:48.188214    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:10:48.188214    6272 machine.go:97] duration metric: took 43.8537018s to provisionDockerMachine
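Editor's note: the single SSH command at 04:10:48 makes the unit update idempotent: the rendered file is written to docker.service.new, and only if it differs from the installed unit is it moved into place and Docker daemon-reloaded, re-enabled, and restarted. The empty output above suggests the unit was already up to date, so no restart was triggered. A compact sketch of that compare-then-swap idea; RunSSH is a placeholder for the guest command runner (ssh_runner in the log), not a real minikube helper:

    package main

    import "fmt"

    // RunSSH stands in for the guest command runner; here it only echoes
    // the command it would run.
    func RunSSH(cmd string) error {
        fmt.Println("ssh>", cmd)
        return nil
    }

    func main() {
        unit := "/lib/systemd/system/docker.service"
        // Restart Docker only if the freshly rendered unit differs from the
        // installed one -- the diff || { mv; daemon-reload; restart; } shape
        // shown in the log at 04:10:48.
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }", unit)
        if err := RunSSH(cmd); err != nil {
            fmt.Println("unit update failed:", err)
        }
    }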
	I0624 04:10:48.188214    6272 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
	I0624 04:10:48.188214    6272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:10:48.202185    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:10:48.202185    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:50.300292    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:52.814556    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:52.814932    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:10:52.931376    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.729112s)
	I0624 04:10:52.942928    6272 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:10:52.949218    6272 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:10:52.949218    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:10:52.950127    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:10:52.951430    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:10:52.952592    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
	I0624 04:10:52.962084    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
	I0624 04:10:52.982953    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:10:53.027604    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
	I0624 04:10:53.074856    6272 start.go:296] duration metric: took 4.8866228s for postStartSetup
	I0624 04:10:53.074856    6272 fix.go:56] duration metric: took 51.4891134s for fixHost
	I0624 04:10:53.074856    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:55.164375    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:55.164624    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:10:57.696078    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:57.701580    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:10:57.702374    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:10:57.702374    6272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719227457.843280300
	
	I0624 04:10:57.840765    6272 fix.go:216] guest clock: 1719227457.843280300
	I0624 04:10:57.840765    6272 fix.go:229] Guest: 2024-06-24 04:10:57.8432803 -0700 PDT Remote: 2024-06-24 04:10:53.0748563 -0700 PDT m=+56.992022601 (delta=4.768424s)
	I0624 04:10:57.840765    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:10:59.988153    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:10:59.988560    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:02.526188    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:02.532676    6272 main.go:141] libmachine: Using SSH client type: native
	I0624 04:11:02.532676    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
	I0624 04:11:02.532676    6272 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719227457
	I0624 04:11:02.687106    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:10:57 UTC 2024
	
	I0624 04:11:02.687106    6272 fix.go:236] clock set: Mon Jun 24 11:10:57 UTC 2024
	 (err=<nil>)
	I0624 04:11:02.687106    6272 start.go:83] releasing machines lock for "functional-094900", held for 1m1.1022557s
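Editor's note: fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host's, and after finding the 4.77s delta logged at 04:10:57 writes an epoch back into the guest with sudo date -s @<seconds>. A compact sketch of that drift check, assuming a plain ssh client and an arbitrary 1-second threshold; which epoch value minikube actually writes back is its own bookkeeping, and the sketch simply uses the host clock:

    package main

    import (
        "fmt"
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestEpoch reads the guest clock as fractional Unix seconds. A plain ssh
    // client is assumed here; the log drives the same command through libmachine.
    func guestEpoch(target string) (float64, error) {
        out, err := exec.Command("ssh", target, "date +%s.%N").Output()
        if err != nil {
            return 0, err
        }
        return strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    }

    func main() {
        target := "docker@172.31.208.115"
        guest, err := guestEpoch(target)
        if err != nil {
            fmt.Println("clock check failed:", err)
            return
        }
        host := float64(time.Now().UnixNano()) / 1e9
        fmt.Printf("guest=%.3f host=%.3f delta=%.3fs\n", guest, host, guest-host)
        if math.Abs(guest-host) > 1 { // assumed threshold, not minikube's cutoff
            // Push the host's current epoch second into the guest.
            set := fmt.Sprintf("date -s @%d", time.Now().Unix())
            _ = exec.Command("ssh", target, "sudo", set).Run()
        }
    }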
	I0624 04:11:02.687652    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:04.752819    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:07.273819    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:07.279101    6272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:11:07.279135    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:07.291913    6272 ssh_runner.go:195] Run: cat /version.json
	I0624 04:11:07.291913    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:11:09.528377    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.237155    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.237571    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115
	
	I0624 04:11:12.260142    6272 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:11:12.260890    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: cat /version.json: (7.0503534s)
	I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0631653s)
	W0624 04:11:14.342293    6272 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0624 04:11:14.342293    6272 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0624 04:11:14.342293    6272 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0624 04:11:14.354665    6272 ssh_runner.go:195] Run: systemctl --version
	I0624 04:11:14.376249    6272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 04:11:14.386363    6272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:11:14.397260    6272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:11:14.415590    6272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0624 04:11:14.415590    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:14.415832    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:14.464291    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:11:14.496544    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:11:14.516006    6272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:11:14.525959    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:11:14.557998    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.589894    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:11:14.622466    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:11:14.658749    6272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:11:14.690692    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:11:14.724824    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:11:14.754263    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:11:14.784168    6272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:11:14.813679    6272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:11:14.846037    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:15.061547    6272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:11:15.095654    6272 start.go:494] detecting cgroup driver to use...
	I0624 04:11:15.107262    6272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:11:15.141870    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.175611    6272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:11:15.219872    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:11:15.257821    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:11:15.281036    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:11:15.328376    6272 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:11:15.347052    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:11:15.364821    6272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:11:15.412796    6272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:11:15.618728    6272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:11:15.819205    6272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:11:15.819413    6272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:11:15.864903    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:11:16.082704    6272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:12:44.005774    6272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9226447s)
	I0624 04:12:44.018618    6272 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0624 04:12:44.094779    6272 out.go:177] 
	W0624 04:12:44.098077    6272 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
	Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
	Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
	Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
	Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
	Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
	Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:46:37 functional-094900 dockerd[4431]: time="2024-06-24T10:46:37.586824113Z" level=info msg="Starting up"
	Jun 24 10:47:37 functional-094900 dockerd[4431]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:47:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jun 24 10:47:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:47:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:47:37 functional-094900 dockerd[4653]: time="2024-06-24T10:47:37.862696025Z" level=info msg="Starting up"
	Jun 24 10:48:37 functional-094900 dockerd[4653]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:48:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
	Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jun 24 10:49:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:49:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:49:38 functional-094900 dockerd[5189]: time="2024-06-24T10:49:38.371622809Z" level=info msg="Starting up"
	Jun 24 10:50:38 functional-094900 dockerd[5189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:50:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jun 24 10:50:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:50:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:50:38 functional-094900 dockerd[5414]: time="2024-06-24T10:50:38.614330084Z" level=info msg="Starting up"
	Jun 24 10:51:38 functional-094900 dockerd[5414]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:51:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jun 24 10:51:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:51:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:51:38 functional-094900 dockerd[5673]: time="2024-06-24T10:51:38.883496088Z" level=info msg="Starting up"
	Jun 24 10:52:38 functional-094900 dockerd[5673]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:52:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jun 24 10:52:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:52:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:52:39 functional-094900 dockerd[5883]: time="2024-06-24T10:52:39.154000751Z" level=info msg="Starting up"
	Jun 24 10:53:39 functional-094900 dockerd[5883]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:53:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jun 24 10:53:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:53:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:53:39 functional-094900 dockerd[6106]: time="2024-06-24T10:53:39.378634263Z" level=info msg="Starting up"
	Jun 24 10:54:39 functional-094900 dockerd[6106]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:54:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jun 24 10:54:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:54:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:54:39 functional-094900 dockerd[6322]: time="2024-06-24T10:54:39.640816472Z" level=info msg="Starting up"
	Jun 24 10:55:39 functional-094900 dockerd[6322]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:55:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jun 24 10:55:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:55:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:55:39 functional-094900 dockerd[6552]: time="2024-06-24T10:55:39.883655191Z" level=info msg="Starting up"
	Jun 24 10:56:39 functional-094900 dockerd[6552]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:56:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jun 24 10:56:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:56:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:56:40 functional-094900 dockerd[6764]: time="2024-06-24T10:56:40.369690362Z" level=info msg="Starting up"
	Jun 24 10:57:40 functional-094900 dockerd[6764]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:57:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jun 24 10:57:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:57:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:57:40 functional-094900 dockerd[7001]: time="2024-06-24T10:57:40.640830194Z" level=info msg="Starting up"
	Jun 24 10:58:40 functional-094900 dockerd[7001]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:58:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jun 24 10:58:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:58:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:58:40 functional-094900 dockerd[7244]: time="2024-06-24T10:58:40.902491856Z" level=info msg="Starting up"
	Jun 24 10:59:40 functional-094900 dockerd[7244]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 10:59:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jun 24 10:59:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 10:59:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 10:59:41 functional-094900 dockerd[7488]: time="2024-06-24T10:59:41.167040582Z" level=info msg="Starting up"
	Jun 24 11:00:41 functional-094900 dockerd[7488]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:00:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jun 24 11:00:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:00:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:00:41 functional-094900 dockerd[7716]: time="2024-06-24T11:00:41.384363310Z" level=info msg="Starting up"
	Jun 24 11:01:41 functional-094900 dockerd[7716]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:01:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jun 24 11:01:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:01:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:01:41 functional-094900 dockerd[8019]: time="2024-06-24T11:01:41.637458699Z" level=info msg="Starting up"
	Jun 24 11:02:41 functional-094900 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:02:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jun 24 11:02:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:02:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:02:41 functional-094900 dockerd[8232]: time="2024-06-24T11:02:41.846453303Z" level=info msg="Starting up"
	Jun 24 11:03:41 functional-094900 dockerd[8232]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:03:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jun 24 11:03:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:03:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:03:42 functional-094900 dockerd[8446]: time="2024-06-24T11:03:42.087902952Z" level=info msg="Starting up"
	Jun 24 11:04:42 functional-094900 dockerd[8446]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
	Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jun 24 11:05:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:05:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:05:42 functional-094900 dockerd[8994]: time="2024-06-24T11:05:42.587871779Z" level=info msg="Starting up"
	Jun 24 11:06:42 functional-094900 dockerd[8994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:06:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jun 24 11:06:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:06:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:06:42 functional-094900 dockerd[9200]: time="2024-06-24T11:06:42.851146986Z" level=info msg="Starting up"
	Jun 24 11:07:42 functional-094900 dockerd[9200]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
	Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jun 24 11:08:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:08:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:08:43 functional-094900 dockerd[9748]: time="2024-06-24T11:08:43.371382553Z" level=info msg="Starting up"
	Jun 24 11:09:43 functional-094900 dockerd[9748]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:09:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jun 24 11:09:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:09:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:09:43 functional-094900 dockerd[9964]: time="2024-06-24T11:09:43.621132733Z" level=info msg="Starting up"
	Jun 24 11:10:43 functional-094900 dockerd[9964]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:10:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jun 24 11:10:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:10:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:10:43 functional-094900 dockerd[10356]: time="2024-06-24T11:10:43.885023688Z" level=info msg="Starting up"
	Jun 24 11:11:16 functional-094900 dockerd[10356]: time="2024-06-24T11:11:16.110406215Z" level=info msg="Processing signal 'terminated'"
	Jun 24 11:11:43 functional-094900 dockerd[10356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:11:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 11:11:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 11:11:43 functional-094900 dockerd[10783]: time="2024-06-24T11:11:43.985181384Z" level=info msg="Starting up"
	Jun 24 11:12:44 functional-094900 dockerd[10783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 11:12:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0624 04:12:44.099441    6272 out.go:239] * 
	W0624 04:12:44.101667    6272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0624 04:12:44.115005    6272 out.go:177] 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:17:57.349257   12948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 04:18:45.658802   12948 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:18:45.689075   12948 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:18:45.717755   12948 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:18:45.746472   12948 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0624 04:18:45.781621   12948 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
functional_test.go:1234: out/minikube-windows-amd64.exe -p functional-094900 logs failed: exit status 1
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| start   | --download-only -p                                                                          | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
|         | binary-mirror-877500                                                                        |                      |                   |         |                     |                     |
|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
|         | http://127.0.0.1:61584                                                                      |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| delete  | -p binary-mirror-877500                                                                     | binary-mirror-877500 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:21 PDT |
| addons  | disable dashboard -p                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| addons  | enable dashboard -p                                                                         | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT |                     |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| start   | -p addons-517800 --wait=true                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:21 PDT | 24 Jun 24 03:28 PDT |
|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
| addons  | enable headlamp                                                                             | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable nvidia-device-plugin                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
|         | -p addons-517800                                                                            |                      |                   |         |                     |                     |
| ip      | addons-517800 ip                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:28 PDT | 24 Jun 24 03:28 PDT |
|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| ssh     | addons-517800 ssh cat                                                                       | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
|         | /opt/local-path-provisioner/pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5_default_test-pvc/file1 |                      |                   |         |                     |                     |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:30 PDT |
|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable inspektor-gadget -p                                                                 | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:29 PDT | 24 Jun 24 03:29 PDT |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | disable cloud-spanner -p                                                                    | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| ssh     | addons-517800 ssh curl -s                                                                   | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
| addons  | addons-517800 addons                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ip      | addons-517800 ip                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:30 PDT |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:30 PDT | 24 Jun 24 03:31 PDT |
|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:31 PDT |
|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| addons  | addons-517800 addons disable                                                                | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:31 PDT | 24 Jun 24 03:32 PDT |
|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| stop    | -p addons-517800                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
| addons  | enable dashboard -p                                                                         | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:32 PDT |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| addons  | disable gvisor -p                                                                           | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:32 PDT | 24 Jun 24 03:33 PDT |
|         | addons-517800                                                                               |                      |                   |         |                     |                     |
| delete  | -p addons-517800                                                                            | addons-517800        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:33 PDT |
| start   | -p nospam-998200 -n=1 --memory=2250 --wait=false                                            | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:33 PDT | 24 Jun 24 03:36 PDT |
|         | --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                       |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:36 PDT |                     |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT |                     |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:37 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:37 PDT | 24 Jun 24 03:38 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:38 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:38 PDT | 24 Jun 24 03:39 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-998200 --log_dir                                                                     | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| delete  | -p nospam-998200                                                                            | nospam-998200        | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:39 PDT |
| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:39 PDT | 24 Jun 24 03:43 PDT |
|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:43 PDT |                     |
|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:50 PDT | 24 Jun 24 03:52 PDT |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:52 PDT | 24 Jun 24 03:54 PDT |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:54 PDT | 24 Jun 24 03:56 PDT |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-094900 cache add                                                                 | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:56 PDT | 24 Jun 24 03:57 PDT |
|         | minikube-local-cache-test:functional-094900                                                 |                      |                   |         |                     |                     |
| cache   | functional-094900 cache delete                                                              | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
|         | minikube-local-cache-test:functional-094900                                                 |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | list                                                                                        | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT | 24 Jun 24 03:57 PDT |
| ssh     | functional-094900 ssh sudo                                                                  | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
|         | crictl images                                                                               |                      |                   |         |                     |                     |
| ssh     | functional-094900                                                                           | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:57 PDT |                     |
|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| ssh     | functional-094900 ssh                                                                       | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-094900 cache reload                                                              | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:58 PDT | 24 Jun 24 04:00 PDT |
| ssh     | functional-094900 ssh                                                                       | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| kubectl | functional-094900 kubectl --                                                                | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
|         | --context functional-094900                                                                 |                      |                   |         |                     |                     |
|         | get pods                                                                                    |                      |                   |         |                     |                     |
| start   | -p functional-094900                                                                        | functional-094900    | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:09 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|


==> Last Start <==
Log file created at: 2024/06/24 04:09:56
Running on machine: minikube1
Binary: Built with gc go1.22.4 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0624 04:09:56.169374    6272 out.go:291] Setting OutFile to fd 776 ...
I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 04:09:56.169374    6272 out.go:304] Setting ErrFile to fd 1000...
I0624 04:09:56.169374    6272 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0624 04:09:56.192082    6272 out.go:298] Setting JSON to false
I0624 04:09:56.194083    6272 start.go:129] hostinfo: {"hostname":"minikube1","uptime":17851,"bootTime":1719209544,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
W0624 04:09:56.194083    6272 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0624 04:09:56.199085    6272 out.go:177] * [functional-094900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
I0624 04:09:56.201449    6272 notify.go:220] Checking for updates...
I0624 04:09:56.201449    6272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0624 04:09:56.204617    6272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0624 04:09:56.207687    6272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
I0624 04:09:56.209721    6272 out.go:177]   - MINIKUBE_LOCATION=19124
I0624 04:09:56.212428    6272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0624 04:09:56.216786    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 04:09:56.216786    6272 driver.go:392] Setting default libvirt URI to qemu:///system
I0624 04:10:01.520141    6272 out.go:177] * Using the hyperv driver based on existing profile
I0624 04:10:01.523654    6272 start.go:297] selected driver: hyperv
I0624 04:10:01.523654    6272 start.go:901] validating driver "hyperv" against &{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0624 04:10:01.523654    6272 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0624 04:10:01.574064    6272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0624 04:10:01.574064    6272 cni.go:84] Creating CNI manager for ""
I0624 04:10:01.574064    6272 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0624 04:10:01.574643    6272 start.go:340] cluster config:
{Name:functional-094900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-094900 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.208.115 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0624 04:10:01.574802    6272 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0624 04:10:01.580373    6272 out.go:177] * Starting "functional-094900" primary control-plane node in "functional-094900" cluster
I0624 04:10:01.582564    6272 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0624 04:10:01.582564    6272 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0624 04:10:01.582564    6272 cache.go:56] Caching tarball of preloaded images
I0624 04:10:01.582564    6272 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0624 04:10:01.582564    6272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0624 04:10:01.582564    6272 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-094900\config.json ...
I0624 04:10:01.584620    6272 start.go:360] acquireMachinesLock for functional-094900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0624 04:10:01.584620    6272 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-094900"
I0624 04:10:01.585549    6272 start.go:96] Skipping create...Using existing machine configuration
I0624 04:10:01.585549    6272 fix.go:54] fixHost starting: 
I0624 04:10:01.585549    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:04.327490    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:04.327490    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:04.327490    6272 fix.go:112] recreateIfNeeded on functional-094900: state=Running err=<nil>
W0624 04:10:04.327490    6272 fix.go:138] unexpected machine state, will restart: <nil>
I0624 04:10:04.330864    6272 out.go:177] * Updating the running hyperv "functional-094900" VM ...
I0624 04:10:04.334348    6272 machine.go:94] provisionDockerMachine start ...
I0624 04:10:04.334348    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:06.500727    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:06.500727    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:06.500727    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:09.049416    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:09.049640    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:09.055373    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:09.056075    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:09.056075    6272 main.go:141] libmachine: About to run SSH command:
hostname
I0624 04:10:09.184519    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900

I0624 04:10:09.184704    6272 buildroot.go:166] provisioning hostname "functional-094900"
I0624 04:10:09.184799    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:11.277821    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:11.278790    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:11.278790    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:13.814522    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:13.814522    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:13.819687    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:13.820422    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:13.820422    6272 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-094900 && echo "functional-094900" | sudo tee /etc/hostname
I0624 04:10:13.989233    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-094900

I0624 04:10:13.989368    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:16.156521    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:16.156521    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:16.156521    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:18.756341    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:18.756341    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:18.763676    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:18.763776    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:18.763776    6272 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sfunctional-094900' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-094900/g' /etc/hosts;
			else 
				echo '127.0.1.1 functional-094900' | sudo tee -a /etc/hosts; 
			fi
		fi
I0624 04:10:18.905084    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0624 04:10:18.905084    6272 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0624 04:10:18.905084    6272 buildroot.go:174] setting up certificates
I0624 04:10:18.905084    6272 provision.go:84] configureAuth start
I0624 04:10:18.905084    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:21.059954    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:21.059954    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:21.059954    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:23.658050    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:23.658050    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:23.658272    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:25.793188    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:25.793188    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:25.793188    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:28.332135    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:28.332135    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:28.332135    6272 provision.go:143] copyHostCerts
I0624 04:10:28.332962    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0624 04:10:28.332962    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0624 04:10:28.333499    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
I0624 04:10:28.334533    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0624 04:10:28.334533    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0624 04:10:28.334533    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0624 04:10:28.335905    6272 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0624 04:10:28.335905    6272 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0624 04:10:28.336542    6272 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0624 04:10:28.337629    6272 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-094900 san=[127.0.0.1 172.31.208.115 functional-094900 localhost minikube]
I0624 04:10:28.909857    6272 provision.go:177] copyRemoteCerts
I0624 04:10:28.919860    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0624 04:10:28.919860    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:31.039795    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:31.039795    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:31.039795    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:33.573514    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:33.573514    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:33.573811    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
I0624 04:10:33.690262    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7703173s)
I0624 04:10:33.690859    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
I0624 04:10:33.738657    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0624 04:10:33.785623    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0624 04:10:33.832878    6272 provision.go:87] duration metric: took 14.9277378s to configureAuth
I0624 04:10:33.832878    6272 buildroot.go:189] setting minikube options for container-runtime
I0624 04:10:33.833584    6272 config.go:182] Loaded profile config "functional-094900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0624 04:10:33.833584    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:35.961242    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:35.962274    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:35.962335    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:38.447273    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:38.447542    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:38.453995    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:38.454680    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:38.454680    6272 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0624 04:10:38.586052    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0624 04:10:38.586052    6272 buildroot.go:70] root file system type: tmpfs
I0624 04:10:38.586603    6272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0624 04:10:38.586603    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:40.661150    6272 main.go:141] libmachine: [stdout =====>] : Running

I0624 04:10:40.662074    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:40.662133    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:43.184441    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

I0624 04:10:43.185079    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:43.191645    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:43.191810    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:43.191810    6272 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0624 04:10:43.345253    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

                                                
                                                
[Service]
Type=notify
Restart=on-failure

                                                
                                                

                                                
                                                

                                                
                                                
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

                                                
                                                
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

                                                
                                                
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

                                                
                                                
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

                                                
                                                
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target
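The pair of ExecStart= lines in the unit above follows the standard systemd rule for overriding an inherited command: the first, empty ExecStart= clears whatever the base configuration defined, and the second supplies the replacement, which is exactly what the "more than one ExecStart= setting" comment warns about. A minimal illustrative drop-in using the same pattern (the path and flags below are placeholders, not the values minikube writes):

  # /etc/systemd/system/docker.service.d/override.conf (illustrative only)
  sudo mkdir -p /etc/systemd/system/docker.service.d
  sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
  [Service]
  # Clear the ExecStart inherited from the base unit first; without this line
  # systemd refuses to start the service ("more than one ExecStart= setting").
  ExecStart=
  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart docker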

                                                
                                                
I0624 04:10:43.345452    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:45.417488    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:10:45.417488    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:45.417488    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:48.024920    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:10:48.024975    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:48.031261    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:48.031261    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:48.031261    6272 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0624 04:10:48.188214    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0624 04:10:48.188214    6272 machine.go:97] duration metric: took 43.8537018s to provisionDockerMachine
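The SSH command above is an install-if-changed idiom: diff -u exits 0 when docker.service.new matches the installed unit, so the block after || (move the new file into place, daemon-reload, enable, restart) runs only when the generated unit actually differs. The same pattern for an arbitrary unit, with placeholder names:

  new=/tmp/example.service.new              # freshly generated unit (placeholder path)
  cur=/etc/systemd/system/example.service
  sudo diff -u "$cur" "$new" || {
    # Files differ (or $cur does not exist yet): install the new unit and restart.
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload && sudo systemctl -f enable example && sudo systemctl -f restart example
  }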
I0624 04:10:48.188214    6272 start.go:293] postStartSetup for "functional-094900" (driver="hyperv")
I0624 04:10:48.188214    6272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0624 04:10:48.202185    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0624 04:10:48.202185    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:50.300292    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:10:50.300292    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:50.300292    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:52.814556    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:10:52.814556    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:52.814932    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
I0624 04:10:52.931376    6272 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.729112s)
I0624 04:10:52.942928    6272 ssh_runner.go:195] Run: cat /etc/os-release
I0624 04:10:52.949218    6272 info.go:137] Remote host: Buildroot 2023.02.9
I0624 04:10:52.949218    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
I0624 04:10:52.950127    6272 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
I0624 04:10:52.951430    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
I0624 04:10:52.952592    6272 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts -> hosts in /etc/test/nested/copy/944
I0624 04:10:52.962084    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/944
I0624 04:10:52.982953    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
I0624 04:10:53.027604    6272 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts --> /etc/test/nested/copy/944/hosts (40 bytes)
I0624 04:10:53.074856    6272 start.go:296] duration metric: took 4.8866228s for postStartSetup
I0624 04:10:53.074856    6272 fix.go:56] duration metric: took 51.4891134s for fixHost
I0624 04:10:53.074856    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:55.164375    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:10:55.164375    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:55.164624    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:10:57.696078    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:10:57.696078    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:57.701580    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:10:57.702374    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:10:57.702374    6272 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0624 04:10:57.840765    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719227457.843280300

                                                
                                                
I0624 04:10:57.840765    6272 fix.go:216] guest clock: 1719227457.843280300
I0624 04:10:57.840765    6272 fix.go:229] Guest: 2024-06-24 04:10:57.8432803 -0700 PDT Remote: 2024-06-24 04:10:53.0748563 -0700 PDT m=+56.992022601 (delta=4.768424s)
I0624 04:10:57.840765    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:10:59.988153    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:10:59.988153    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:10:59.988560    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:11:02.526188    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:11:02.526188    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:02.532676    6272 main.go:141] libmachine: Using SSH client type: native
I0624 04:11:02.532676    6272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.208.115 22 <nil> <nil>}
I0624 04:11:02.532676    6272 main.go:141] libmachine: About to run SSH command:
sudo date -s @1719227457
I0624 04:11:02.687106    6272 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:10:57 UTC 2024

                                                
                                                
I0624 04:11:02.687106    6272 fix.go:236] clock set: Mon Jun 24 11:10:57 UTC 2024
(err=<nil>)
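The clock fix above happens in three steps: read the guest time with date +%s.%N, compare it against the host time (a ~4.77s skew in this run), then pin the guest clock with sudo date -s @<epoch>. An illustrative way to repeat the same check by hand over SSH (the key path below is a placeholder; the IP is the one reported in the log):

  GUEST=172.31.208.115                      # VM IP from the log above
  KEY=~/.minikube/machines/<name>/id_rsa    # placeholder key path
  guest_time=$(ssh -i "$KEY" docker@"$GUEST" 'date +%s.%N')
  host_time=$(date +%s.%N)
  echo "guest=$guest_time host=$host_time"
  # Force the guest clock to the host's current epoch, as minikube does above.
  ssh -i "$KEY" docker@"$GUEST" "sudo date -s @$(date +%s)"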
I0624 04:11:02.687106    6272 start.go:83] releasing machines lock for "functional-094900", held for 1m1.1022557s
I0624 04:11:02.687652    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:11:04.752819    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:11:04.752819    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:04.752819    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:11:07.273819    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:11:07.273819    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:07.279101    6272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0624 04:11:07.279135    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:11:07.291913    6272 ssh_runner.go:195] Run: cat /version.json
I0624 04:11:07.291913    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-094900 ).state
I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:11:09.528377    6272 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0624 04:11:09.528377    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:09.528865    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:11:09.528865    6272 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-094900 ).networkadapters[0]).ipaddresses[0]
I0624 04:11:12.237155    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:11:12.237155    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:12.237571    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
I0624 04:11:12.260142    6272 main.go:141] libmachine: [stdout =====>] : 172.31.208.115

                                                
                                                
I0624 04:11:12.260142    6272 main.go:141] libmachine: [stderr =====>] : 
I0624 04:11:12.260890    6272 sshutil.go:53] new ssh client: &{IP:172.31.208.115 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-094900\id_rsa Username:docker}
I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: cat /version.json: (7.0503534s)
I0624 04:11:14.342293    6272 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.0631653s)
W0624 04:11:14.342293    6272 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
stdout:

                                                
                                                
stderr:
curl: (28) Resolving timed out after 2000 milliseconds
W0624 04:11:14.342293    6272 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
W0624 04:11:14.342293    6272 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
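The curl failure above ("Resolving timed out after 2000 milliseconds") means the VM could not resolve registry.k8s.io within the 2-second timeout, and the warning points at the proxy guide. Per that guide, minikube picks up the standard proxy environment variables of the shell it is started from; an illustrative setup, assuming a corporate proxy at http://proxy.example.com:3128 (hypothetical address):

  # Set before running "minikube start"; the proxy address is a placeholder.
  export HTTP_PROXY=http://proxy.example.com:3128
  export HTTPS_PROXY=http://proxy.example.com:3128
  # Exclude local and in-cluster ranges so that traffic bypasses the proxy.
  export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,172.31.208.115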
I0624 04:11:14.354665    6272 ssh_runner.go:195] Run: systemctl --version
I0624 04:11:14.376249    6272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0624 04:11:14.386363    6272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0624 04:11:14.397260    6272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0624 04:11:14.415590    6272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0624 04:11:14.415590    6272 start.go:494] detecting cgroup driver to use...
I0624 04:11:14.415832    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0624 04:11:14.464291    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0624 04:11:14.496544    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0624 04:11:14.516006    6272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0624 04:11:14.525959    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0624 04:11:14.557998    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0624 04:11:14.589894    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0624 04:11:14.622466    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0624 04:11:14.658749    6272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0624 04:11:14.690692    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0624 04:11:14.724824    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0624 04:11:14.754263    6272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0624 04:11:14.784168    6272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0624 04:11:14.813679    6272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0624 04:11:14.846037    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0624 04:11:15.061547    6272 ssh_runner.go:195] Run: sudo systemctl restart containerd
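The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the registry.k8s.io/pause:3.9 sandbox image and /etc/cni/net.d for CNI configs, while /etc/crictl.yaml is pointed at the containerd socket. A quick, illustrative way to verify the result on the guest:

  cat /etc/crictl.yaml
  # expected: runtime-endpoint: unix:///run/containerd/containerd.sock
  grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
  # expected after the edits above:
  #   SystemdCgroup = false
  #   sandbox_image = "registry.k8s.io/pause:3.9"
  #   conf_dir = "/etc/cni/net.d"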
I0624 04:11:15.095654    6272 start.go:494] detecting cgroup driver to use...
I0624 04:11:15.107262    6272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0624 04:11:15.141870    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0624 04:11:15.175611    6272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0624 04:11:15.219872    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0624 04:11:15.257821    6272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0624 04:11:15.281036    6272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0624 04:11:15.328376    6272 ssh_runner.go:195] Run: which cri-dockerd
I0624 04:11:15.347052    6272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0624 04:11:15.364821    6272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0624 04:11:15.412796    6272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0624 04:11:15.618728    6272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0624 04:11:15.819205    6272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0624 04:11:15.819413    6272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0624 04:11:15.864903    6272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0624 04:11:16.082704    6272 ssh_runner.go:195] Run: sudo systemctl restart docker
I0624 04:12:44.005774    6272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9226447s)
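Before this failing restart, the log shows /etc/crictl.yaml being repointed at cri-dockerd and a 130-byte /etc/docker/daemon.json being written to select the cgroupfs driver. The daemon.json contents are not printed, so the shape shown below is only an assumption consistent with the "configuring docker to use \"cgroupfs\"" message; inspecting both files is a reasonable first step when the restart fails as it does here:

  sudo cat /etc/crictl.yaml
  # runtime-endpoint: unix:///var/run/cri-dockerd.sock
  sudo cat /etc/docker/daemon.json
  # assumed shape, not from the log: {"exec-opts": ["native.cgroupdriver=cgroupfs"]}
  sudo journalctl --no-pager -u docker | tail -n 50   # same diagnostic minikube runs next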
I0624 04:12:44.018618    6272 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0624 04:12:44.094779    6272 out.go:177] 
W0624 04:12:44.098077    6272 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

                                                
                                                
sudo journalctl --no-pager -u docker:
-- stdout --
Jun 24 10:41:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.930014159Z" level=info msg="Starting up"
Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.931009876Z" level=info msg="containerd not running, starting managed containerd"
Jun 24 10:41:37 functional-094900 dockerd[668]: time="2024-06-24T10:41:37.932388137Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=674
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.966540336Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987779023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987873834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987929940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.987945542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988009650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988027652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988186870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988276281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988295783Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988305784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988426999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.988721133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991451053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991544464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991668578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991754788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991845799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.991986515Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 24 10:41:37 functional-094900 dockerd[674]: time="2024-06-24T10:41:37.992067125Z" level=info msg="metadata content store policy set" policy=shared
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017003230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017117543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017142346Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017164948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017179950Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017279461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017686905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017798118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017896228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017915131Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017928532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017940533Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017951434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017964836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017978637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.017990839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018002440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018014341Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018032043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018044945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018056246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018067847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018091350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018109952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018123153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018135155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018148156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018161758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018172659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018183160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018195861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018210663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018235166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018259768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018306874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018362880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018378081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018390683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018400284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018411785Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018424586Z" level=info msg="NRI interface is disabled by configuration."
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018616107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018752022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018802528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 24 10:41:38 functional-094900 dockerd[674]: time="2024-06-24T10:41:38.018857634Z" level=info msg="containerd successfully booted in 0.053778s"
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.004772432Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.032962734Z" level=info msg="Loading containers: start."
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.354634722Z" level=info msg="Loading containers: done."
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379777971Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.379973892Z" level=info msg="Daemon has completed initialization"
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.510820277Z" level=info msg="API listen on /var/run/docker.sock"
Jun 24 10:41:39 functional-094900 systemd[1]: Started Docker Application Container Engine.
Jun 24 10:41:39 functional-094900 dockerd[668]: time="2024-06-24T10:41:39.512210023Z" level=info msg="API listen on [::]:2376"
Jun 24 10:42:08 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.540229333Z" level=info msg="Processing signal 'terminated'"
Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.542730739Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543026240Z" level=info msg="Daemon shutdown complete"
Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543071640Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 24 10:42:08 functional-094900 dockerd[668]: time="2024-06-24T10:42:08.543089240Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 24 10:42:09 functional-094900 systemd[1]: docker.service: Deactivated successfully.
Jun 24 10:42:09 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:42:09 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.610600447Z" level=info msg="Starting up"
Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.611564949Z" level=info msg="containerd not running, starting managed containerd"
Jun 24 10:42:09 functional-094900 dockerd[1023]: time="2024-06-24T10:42:09.612437951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1029
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.644597130Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671822996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671907896Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671956796Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.671973197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672020897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672034997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672230497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672322297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672341697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672353897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672379697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.672539498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675705906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.675903206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676067807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676165907Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676229807Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676252107Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676326107Z" level=info msg="metadata content store policy set" policy=shared
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676487008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676551108Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676570708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676585408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676601108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676649108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.676885209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677012309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677109309Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677129909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677144509Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677158709Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677182809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677199209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677220409Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677241209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677255909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677309310Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677335810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677351310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677364210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677377510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677418810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677435810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677449010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677462410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677476410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677514010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677528910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677542110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677554410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677572210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677594310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677634410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677649710Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677692910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677710611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677723411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677737211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677752111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677765611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.677776611Z" level=info msg="NRI interface is disabled by configuration."
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678105211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678239112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678616313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 24 10:42:09 functional-094900 dockerd[1029]: time="2024-06-24T10:42:09.678752013Z" level=info msg="containerd successfully booted in 0.034919s"
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.653953994Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.674745945Z" level=info msg="Loading containers: start."
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.842091454Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.927946664Z" level=info msg="Loading containers: done."
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952301523Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.952371023Z" level=info msg="Daemon has completed initialization"
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.992876422Z" level=info msg="API listen on [::]:2376"
Jun 24 10:42:10 functional-094900 dockerd[1023]: time="2024-06-24T10:42:10.993007422Z" level=info msg="API listen on /var/run/docker.sock"
Jun 24 10:42:10 functional-094900 systemd[1]: Started Docker Application Container Engine.
Jun 24 10:42:20 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.525490900Z" level=info msg="Processing signal 'terminated'"
Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527676205Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527925306Z" level=info msg="Daemon shutdown complete"
Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527971206Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 24 10:42:20 functional-094900 dockerd[1023]: time="2024-06-24T10:42:20.527991306Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 24 10:42:21 functional-094900 systemd[1]: docker.service: Deactivated successfully.
Jun 24 10:42:21 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:42:21 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.599802423Z" level=info msg="Starting up"
Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.600740825Z" level=info msg="containerd not running, starting managed containerd"
Jun 24 10:42:21 functional-094900 dockerd[1330]: time="2024-06-24T10:42:21.601663528Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.633035604Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660886872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660941472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.660989773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661006173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661036973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661057273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661320873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661341373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661356473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661368573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661413274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.661556274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664227180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664256881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664452681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664489281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664515081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664534581Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664547281Z" level=info msg="metadata content store policy set" policy=shared
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664675382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664708882Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664726682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664744482Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664762582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.664828882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665135183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665197283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665215183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665243183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665259283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665372883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665394983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665413483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665443783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665470784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665505884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665523384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665562184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665592484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665620584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665637484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665656084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665672084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665688184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665703584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665719984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665739484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665754184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665769784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665784784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665804184Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665830284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665855984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665870084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665917185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665937285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665951185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665966485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665978685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.665994185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666006285Z" level=info msg="NRI interface is disabled by configuration."
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666329086Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666437986Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666519286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 24 10:42:21 functional-094900 dockerd[1336]: time="2024-06-24T10:42:21.666563086Z" level=info msg="containerd successfully booted in 0.034602s"
Jun 24 10:42:22 functional-094900 dockerd[1330]: time="2024-06-24T10:42:22.953864630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.757620176Z" level=info msg="Loading containers: start."
Jun 24 10:42:25 functional-094900 dockerd[1330]: time="2024-06-24T10:42:25.930535198Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.012848699Z" level=info msg="Loading containers: done."
Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037489860Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.037613260Z" level=info msg="Daemon has completed initialization"
Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.090968090Z" level=info msg="API listen on /var/run/docker.sock"
Jun 24 10:42:26 functional-094900 dockerd[1330]: time="2024-06-24T10:42:26.091098290Z" level=info msg="API listen on [::]:2376"
Jun 24 10:42:26 functional-094900 systemd[1]: Started Docker Application Container Engine.
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212428298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.212984221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213027215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.213377966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.314422081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315231269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.315249566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.316257427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.374853016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375153375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.375447134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.376040852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403498352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403585140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.403599238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.405402488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577452174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577618251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.577979101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.578701601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847492598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847557788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847637277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.847791256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.921942593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922478719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.922805773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.926147011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.935962252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936305205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.936549171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:34 functional-094900 dockerd[1336]: time="2024-06-24T10:42:34.939423473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.862022449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863052614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863314505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:55 functional-094900 dockerd[1336]: time="2024-06-24T10:42:55.863577296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.214745587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215100875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215171173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:56 functional-094900 dockerd[1336]: time="2024-06-24T10:42:56.215462364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.542089188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543746639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.543915934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:57 functional-094900 dockerd[1336]: time="2024-06-24T10:42:57.544332422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040422152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040563585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040603095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:42:58 functional-094900 dockerd[1336]: time="2024-06-24T10:42:58.040894864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056579179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056731910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.056857235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.057087481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339569671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339778213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.339822922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:43:04 functional-094900 dockerd[1336]: time="2024-06-24T10:43:04.340005059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 24 10:44:25 functional-094900 systemd[1]: Stopping Docker Application Container Engine...
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.625003224Z" level=info msg="Processing signal 'terminated'"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831125254Z" level=info msg="ignoring event" container=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.831746763Z" level=info msg="ignoring event" container=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.832422173Z" level=info msg="shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833128784Z" level=warning msg="cleaning up after shim disconnected" id=94a0f1461159507db91d58cf08032f4882324469bbfafc3fcc3aa08ee08933a8 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833318386Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833522589Z" level=info msg="shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.833960896Z" level=warning msg="cleaning up after shim disconnected" id=f26557e8864a4d99306308665216ab81ff6b87d8da9b6aac7fb9059c72ea9878 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.834099798Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.844779955Z" level=info msg="ignoring event" container=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845213261Z" level=info msg="shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845334163Z" level=warning msg="cleaning up after shim disconnected" id=2a2d1f31bf0f3d4ea05b5b8bbae27e0a566dca5616dabf66eac343de6f594f4a namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.845419664Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848645412Z" level=info msg="shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848691512Z" level=warning msg="cleaning up after shim disconnected" id=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.848701113Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.849782229Z" level=info msg="ignoring event" container=7fa3ca7e9a5ec4496d1d1a2119410e89fb3577da3cfe07cff21c4782bf7cdce6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880037573Z" level=info msg="shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880210676Z" level=warning msg="cleaning up after shim disconnected" id=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.880308077Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.884535139Z" level=info msg="ignoring event" container=478d0682f5a4056688b3e9e9d987a34d851d3f5f3f483b9aa6031f06a4f648f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891809446Z" level=info msg="shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891933348Z" level=warning msg="cleaning up after shim disconnected" id=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.891979349Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.910922827Z" level=info msg="shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911286733Z" level=warning msg="cleaning up after shim disconnected" id=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.911585537Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917417823Z" level=info msg="ignoring event" container=c04c8722be31daccaf5a801423d4b439216a4be2c9e78fa9227767656779d901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917671527Z" level=info msg="ignoring event" container=70629be1a38e6692df15f29406a88b2b7feea34dcc45b8b3160b34a35df37f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.917869130Z" level=info msg="ignoring event" container=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.918019532Z" level=info msg="ignoring event" container=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.924932933Z" level=info msg="shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925232338Z" level=warning msg="cleaning up after shim disconnected" id=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.925304539Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932132139Z" level=info msg="ignoring event" container=8888eb31bb40d8cf53ae8ebbc6c2a208559bdd6d3f77c0bf7ce0d31223e248e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.932177640Z" level=info msg="ignoring event" container=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936421102Z" level=info msg="shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936517604Z" level=warning msg="cleaning up after shim disconnected" id=42dfe2bbe891b79d0862a0e2027b2a72b4ff23347c79ba6e0a84ee8fa0c274e0 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.936529504Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938410332Z" level=info msg="shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938453032Z" level=warning msg="cleaning up after shim disconnected" id=f431da6b7ed32b1bce325369f4b2c639275f1813a355907c60d88751585a7842 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.938464032Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943234903Z" level=info msg="shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943488506Z" level=warning msg="cleaning up after shim disconnected" id=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943547107Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.943736210Z" level=info msg="shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945469335Z" level=warning msg="cleaning up after shim disconnected" id=7dc755218cbdf3e39f58174ed51d3a7271084a20b7f21caa711cc5d40eb5ca32 namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.945582937Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:25 functional-094900 dockerd[1330]: time="2024-06-24T10:44:25.945987943Z" level=info msg="ignoring event" container=6d858bb1bb1dd7bff772cb95698367289a6f6668e8411513dcea681593d8c20e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:25 functional-094900 dockerd[1336]: time="2024-06-24T10:44:25.988713471Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Jun 24 10:44:26 functional-094900 dockerd[1336]: time="2024-06-24T10:44:26.043027470Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745348407Z" level=info msg="shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745431808Z" level=warning msg="cleaning up after shim disconnected" id=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 namespace=moby
Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.745446708Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:30 functional-094900 dockerd[1330]: time="2024-06-24T10:44:30.745739613Z" level=info msg="ignoring event" container=4cb4291260311bca45a531501d70033d741acad1ee2e2e1a8277d69771593d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:30 functional-094900 dockerd[1336]: time="2024-06-24T10:44:30.779014502Z" level=warning msg="cleanup warnings time=\"2024-06-24T10:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.752476493Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed
Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808266881Z" level=info msg="shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808784278Z" level=warning msg="cleaning up after shim disconnected" id=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed namespace=moby
Jun 24 10:44:35 functional-094900 dockerd[1336]: time="2024-06-24T10:44:35.808900677Z" level=info msg="cleaning up dead shim" namespace=moby
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.807718685Z" level=info msg="ignoring event" container=eeddd367d6b5947a38ccb7189826030d0ad2653b5626b34760dcf0e5994f07ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.875600385Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876528478Z" level=info msg="Daemon shutdown complete"
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876760076Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 24 10:44:35 functional-094900 dockerd[1330]: time="2024-06-24T10:44:35.876767176Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Deactivated successfully.
Jun 24 10:44:36 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:44:36 functional-094900 systemd[1]: docker.service: Consumed 4.707s CPU time.
Jun 24 10:44:36 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:44:36 functional-094900 dockerd[3916]: time="2024-06-24T10:44:36.949903071Z" level=info msg="Starting up"
Jun 24 10:45:36 functional-094900 dockerd[3916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:45:36 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:45:36 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:45:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Jun 24 10:45:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:45:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:45:37 functional-094900 dockerd[4118]: time="2024-06-24T10:45:37.214264273Z" level=info msg="Starting up"
Jun 24 10:46:37 functional-094900 dockerd[4118]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:46:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:46:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Jun 24 10:46:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:46:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:46:37 functional-094900 dockerd[4431]: time="2024-06-24T10:46:37.586824113Z" level=info msg="Starting up"
Jun 24 10:47:37 functional-094900 dockerd[4431]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:47:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:47:37 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jun 24 10:47:37 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:47:37 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:47:37 functional-094900 dockerd[4653]: time="2024-06-24T10:47:37.862696025Z" level=info msg="Starting up"
Jun 24 10:48:37 functional-094900 dockerd[4653]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:48:37 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:48:37 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:48:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Jun 24 10:48:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:48:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:48:38 functional-094900 dockerd[4976]: time="2024-06-24T10:48:38.140381595Z" level=info msg="Starting up"
Jun 24 10:49:38 functional-094900 dockerd[4976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:49:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:49:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
Jun 24 10:49:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:49:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:49:38 functional-094900 dockerd[5189]: time="2024-06-24T10:49:38.371622809Z" level=info msg="Starting up"
Jun 24 10:50:38 functional-094900 dockerd[5189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:50:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:50:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
Jun 24 10:50:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:50:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:50:38 functional-094900 dockerd[5414]: time="2024-06-24T10:50:38.614330084Z" level=info msg="Starting up"
Jun 24 10:51:38 functional-094900 dockerd[5414]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:51:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:51:38 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
Jun 24 10:51:38 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:51:38 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:51:38 functional-094900 dockerd[5673]: time="2024-06-24T10:51:38.883496088Z" level=info msg="Starting up"
Jun 24 10:52:38 functional-094900 dockerd[5673]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:52:38 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:52:38 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:52:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
Jun 24 10:52:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:52:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:52:39 functional-094900 dockerd[5883]: time="2024-06-24T10:52:39.154000751Z" level=info msg="Starting up"
Jun 24 10:53:39 functional-094900 dockerd[5883]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:53:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:53:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
Jun 24 10:53:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:53:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:53:39 functional-094900 dockerd[6106]: time="2024-06-24T10:53:39.378634263Z" level=info msg="Starting up"
Jun 24 10:54:39 functional-094900 dockerd[6106]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:54:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:54:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
Jun 24 10:54:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:54:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:54:39 functional-094900 dockerd[6322]: time="2024-06-24T10:54:39.640816472Z" level=info msg="Starting up"
Jun 24 10:55:39 functional-094900 dockerd[6322]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:55:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:55:39 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
Jun 24 10:55:39 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:55:39 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:55:39 functional-094900 dockerd[6552]: time="2024-06-24T10:55:39.883655191Z" level=info msg="Starting up"
Jun 24 10:56:39 functional-094900 dockerd[6552]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:56:39 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:56:39 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:56:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
Jun 24 10:56:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:56:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:56:40 functional-094900 dockerd[6764]: time="2024-06-24T10:56:40.369690362Z" level=info msg="Starting up"
Jun 24 10:57:40 functional-094900 dockerd[6764]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:57:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:57:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
Jun 24 10:57:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:57:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:57:40 functional-094900 dockerd[7001]: time="2024-06-24T10:57:40.640830194Z" level=info msg="Starting up"
Jun 24 10:58:40 functional-094900 dockerd[7001]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:58:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:58:40 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
Jun 24 10:58:40 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:58:40 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:58:40 functional-094900 dockerd[7244]: time="2024-06-24T10:58:40.902491856Z" level=info msg="Starting up"
Jun 24 10:59:40 functional-094900 dockerd[7244]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 10:59:40 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 10:59:40 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 10:59:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
Jun 24 10:59:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 10:59:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 10:59:41 functional-094900 dockerd[7488]: time="2024-06-24T10:59:41.167040582Z" level=info msg="Starting up"
Jun 24 11:00:41 functional-094900 dockerd[7488]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:00:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:00:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
Jun 24 11:00:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:00:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:00:41 functional-094900 dockerd[7716]: time="2024-06-24T11:00:41.384363310Z" level=info msg="Starting up"
Jun 24 11:01:41 functional-094900 dockerd[7716]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:01:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:01:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
Jun 24 11:01:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:01:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:01:41 functional-094900 dockerd[8019]: time="2024-06-24T11:01:41.637458699Z" level=info msg="Starting up"
Jun 24 11:02:41 functional-094900 dockerd[8019]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:02:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:02:41 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
Jun 24 11:02:41 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:02:41 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:02:41 functional-094900 dockerd[8232]: time="2024-06-24T11:02:41.846453303Z" level=info msg="Starting up"
Jun 24 11:03:41 functional-094900 dockerd[8232]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:03:41 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:03:41 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:03:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
Jun 24 11:03:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:03:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:03:42 functional-094900 dockerd[8446]: time="2024-06-24T11:03:42.087902952Z" level=info msg="Starting up"
Jun 24 11:04:42 functional-094900 dockerd[8446]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:04:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:04:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
Jun 24 11:04:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:04:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:04:42 functional-094900 dockerd[8775]: time="2024-06-24T11:04:42.386415056Z" level=info msg="Starting up"
Jun 24 11:05:42 functional-094900 dockerd[8775]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:05:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:05:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
Jun 24 11:05:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:05:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:05:42 functional-094900 dockerd[8994]: time="2024-06-24T11:05:42.587871779Z" level=info msg="Starting up"
Jun 24 11:06:42 functional-094900 dockerd[8994]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:06:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:06:42 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
Jun 24 11:06:42 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:06:42 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:06:42 functional-094900 dockerd[9200]: time="2024-06-24T11:06:42.851146986Z" level=info msg="Starting up"
Jun 24 11:07:42 functional-094900 dockerd[9200]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:07:42 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:07:42 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:07:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
Jun 24 11:07:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:07:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:07:43 functional-094900 dockerd[9525]: time="2024-06-24T11:07:43.124389511Z" level=info msg="Starting up"
Jun 24 11:08:43 functional-094900 dockerd[9525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:08:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:08:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
Jun 24 11:08:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:08:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:08:43 functional-094900 dockerd[9748]: time="2024-06-24T11:08:43.371382553Z" level=info msg="Starting up"
Jun 24 11:09:43 functional-094900 dockerd[9748]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:09:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:09:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
Jun 24 11:09:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:09:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:09:43 functional-094900 dockerd[9964]: time="2024-06-24T11:09:43.621132733Z" level=info msg="Starting up"
Jun 24 11:10:43 functional-094900 dockerd[9964]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:10:43 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
Jun 24 11:10:43 functional-094900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
Jun 24 11:10:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:10:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:10:43 functional-094900 dockerd[10356]: time="2024-06-24T11:10:43.885023688Z" level=info msg="Starting up"
Jun 24 11:11:16 functional-094900 dockerd[10356]: time="2024-06-24T11:11:16.110406215Z" level=info msg="Processing signal 'terminated'"
Jun 24 11:11:43 functional-094900 dockerd[10356]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:11:43 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:11:43 functional-094900 systemd[1]: Stopped Docker Application Container Engine.
Jun 24 11:11:43 functional-094900 systemd[1]: Starting Docker Application Container Engine...
Jun 24 11:11:43 functional-094900 dockerd[10783]: time="2024-06-24T11:11:43.985181384Z" level=info msg="Starting up"
Jun 24 11:12:44 functional-094900 dockerd[10783]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 24 11:12:44 functional-094900 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 24 11:12:44 functional-094900 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0624 04:12:44.099441    6272 out.go:239] * 
W0624 04:12:44.101667    6272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0624 04:12:44.115005    6272 out.go:177] 
***
--- FAIL: TestFunctional/serial/LogsCmd (107.00s)
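The journal excerpt above repeats a single failure mode: dockerd starts, cannot dial /run/containerd/containerd.sock before its deadline, exits, and systemd schedules another restart (the counter climbs from 17 to 26), which is why the logs command eventually times out. The sketch below reproduces that dial from inside the guest with Go's standard library; the socket path comes from the log, while the file name, timeout, and everything else are illustrative assumptions rather than dockerd's actual startup code.

// containerd_dial_check.go: a minimal sketch that mimics dockerd's startup
// dial against the containerd socket seen in the journal above. Run it
// inside the guest (e.g. via `minikube ssh`); it is illustrative only.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/run/containerd/containerd.sock" // path taken from the dockerd error above

	// dockerd gives up after a deadline; use a short timeout here so the
	// check fails fast instead of hanging for a minute.
	conn, err := net.DialTimeout("unix", socketPath, 5*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("containerd socket accepted a connection")
}

If the dial fails, `systemctl status containerd` and `journalctl -u containerd` inside the guest are the next places to look.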
TestFunctional/parallel (0s)
=== RUN   TestFunctional/parallel
functional_test.go:168: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
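functional_test.go:168 aborts with "deadline exceeded" because the serial failures above consumed the binary's -timeout budget before the parallel group could start. The sketch below shows the general pattern using the standard testing.T.Deadline API; the helper name and the 5-minute threshold are illustrative assumptions, and the real harness fails the group rather than skipping it.

// deadline_guard_test.go: a sketch of a guard that skips a test group when
// too little of the `go test -timeout` budget remains, in the spirit of the
// "Unable to run more tests (deadline exceeded)" message above.
package example

import (
	"testing"
	"time"
)

// skipIfLittleTimeLeft skips t when less than `need` remains before the
// -timeout deadline. Helper name and threshold are illustrative.
func skipIfLittleTimeLeft(t *testing.T, need time.Duration) {
	t.Helper()
	deadline, ok := t.Deadline() // ok is false when no -timeout was set
	if ok && time.Until(deadline) < need {
		t.Skipf("only %v left before the test deadline, need %v",
			time.Until(deadline).Round(time.Second), need)
	}
}

func TestParallelGroup(t *testing.T) {
	skipIfLittleTimeLeft(t, 5*time.Minute)
	// ... run the parallel subtests here ...
}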
TestMultiControlPlane/serial/PingHostFromPods (69.25s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- sh -c "ping -c 1 172.31.208.1"
E0624 04:33:21.860565     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4462004s)
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0624 04:33:18.400742   10140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-fc5497c4f-lsn8j): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- sh -c "ping -c 1 172.31.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4422683s)
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0624 04:33:29.303438    7156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-fc5497c4f-mg7l6): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- sh -c "ping -c 1 172.31.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4324111s)
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0624 04:33:40.190357    5348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.31.208.1) from pod (busybox-fc5497c4f-rrqj8): exit status 1
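All three busybox pods resolve host.minikube.internal but lose 100% of packets when pinging the Hyper-V host at 172.31.208.1, which is the condition ha_test.go:219 reports. The failing step is just kubectl exec running `ping -c 1` inside the pod; the sketch below drives the same check from Go via os/exec. It assumes the profile's kubeconfig context is named ha-340000 and reuses one pod name and the host IP from the log; it is an illustration of the step, not the test's actual helper.

// ping_from_pod.go: an illustrative re-run of the failing step above:
// exec a single ping from a pod toward the host gateway via kubectl.
// Pod name, context name, and host IP are assumptions taken from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	pod, hostIP := "busybox-fc5497c4f-lsn8j", "172.31.208.1"

	cmd := exec.Command("kubectl", "--context", "ha-340000",
		"exec", pod, "--", "sh", "-c", fmt.Sprintf("ping -c 1 %s", hostIP))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	// ping exits non-zero on 100% packet loss, which is what the test
	// output above reports as "exit status 1".
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "ping from %s to %s failed: %v\n", pod, hostIP, err)
		os.Exit(1)
	}
	fmt.Printf("pod %s can reach host %s\n", pod, hostIP)
}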
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-340000 -n ha-340000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-340000 -n ha-340000: (12.7084351s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 logs -n 25: (9.0421709s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | delete                                                                   | minikube          | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:00 PDT | 24 Jun 24 04:00 PDT |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-094900 kubectl --                                             | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:03 PDT |                     |
	|         | --context functional-094900                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:09 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| delete  | -p functional-094900                                                     | functional-094900 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:19 PDT | 24 Jun 24 04:21 PDT |
	| start   | -p ha-340000 --wait=true                                                 | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:21 PDT | 24 Jun 24 04:32 PDT |
	|         | --memory=2200 --ha                                                       |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr                                                   |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- apply -f                                                 | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- rollout status                                           | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | deployment/busybox                                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- get pods -o                                              | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'                                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- get pods -o                                              | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-lsn8j --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-mg7l6 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-rrqj8 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-lsn8j --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-mg7l6 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-rrqj8 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-lsn8j -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-mg7l6 -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-rrqj8 -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- get pods -o                                              | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-lsn8j                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT |                     |
	|         | busybox-fc5497c4f-lsn8j -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                                                |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-mg7l6                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT |                     |
	|         | busybox-fc5497c4f-mg7l6 -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                                                |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT | 24 Jun 24 04:33 PDT |
	|         | busybox-fc5497c4f-rrqj8                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-340000 -- exec                                                     | ha-340000         | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:33 PDT |                     |
	|         | busybox-fc5497c4f-rrqj8 -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                                                |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 04:21:04
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 04:21:04.440454    7764 out.go:291] Setting OutFile to fd 372 ...
	I0624 04:21:04.441412    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:21:04.441412    7764 out.go:304] Setting ErrFile to fd 792...
	I0624 04:21:04.441614    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:21:04.468985    7764 out.go:298] Setting JSON to false
	I0624 04:21:04.471719    7764 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18519,"bootTime":1719209544,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 04:21:04.472731    7764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 04:21:04.480371    7764 out.go:177] * [ha-340000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 04:21:04.484324    7764 notify.go:220] Checking for updates...
	I0624 04:21:04.486941    7764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:21:04.489306    7764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 04:21:04.491459    7764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 04:21:04.493396    7764 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 04:21:04.497092    7764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 04:21:04.500940    7764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 04:21:09.916557    7764 out.go:177] * Using the hyperv driver based on user configuration
	I0624 04:21:09.920604    7764 start.go:297] selected driver: hyperv
	I0624 04:21:09.920773    7764 start.go:901] validating driver "hyperv" against <nil>
	I0624 04:21:09.920773    7764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 04:21:09.969689    7764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 04:21:09.971001    7764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:21:09.971001    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:21:09.971001    7764 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0624 04:21:09.971001    7764 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 04:21:09.971001    7764 start.go:340] cluster config:
	{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:21:09.971584    7764 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 04:21:09.976713    7764 out.go:177] * Starting "ha-340000" primary control-plane node in "ha-340000" cluster
	I0624 04:21:09.982129    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:21:09.982369    7764 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 04:21:09.982467    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:21:09.982805    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:21:09.982805    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:21:09.983385    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:21:09.983385    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json: {Name:mk5bcae1e9566ffb94b611ccf4e4863330a7bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:21:09.984755    7764 start.go:360] acquireMachinesLock for ha-340000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:21:09.984755    7764 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-340000"
	I0624 04:21:09.984755    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:21:09.984755    7764 start.go:125] createHost starting for "" (driver="hyperv")
	I0624 04:21:09.988253    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:21:09.989141    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:21:09.989141    7764 client.go:168] LocalClient.Create starting
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:21:09.990193    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:21:09.990392    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:21:09.990392    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:21:09.990580    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:21:12.033672    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:21:12.033672    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:12.033803    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:21:13.712611    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:21:13.712819    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:13.712819    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:21:15.173895    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:21:15.174941    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:15.175134    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:21:18.728972    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:21:18.729242    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:18.731483    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:21:19.245672    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:21:19.686943    7764 main.go:141] libmachine: Creating VM...
	I0624 04:21:19.686943    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:22.569717    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:21:24.339286    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:21:24.339286    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:24.339568    7764 main.go:141] libmachine: Creating VHD
	I0624 04:21:24.339568    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:21:28.188000    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6FA1222A-B8A3-4B00-8259-E96C762FA31D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:21:28.188088    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:28.188088    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:21:28.188172    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:21:28.196886    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:21:31.381741    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:31.381741    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:31.382614    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd' -SizeBytes 20000MB
	I0624 04:21:33.876048    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:33.876048    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:33.876472    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:21:37.492970    7764 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-340000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:21:37.492970    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:37.493944    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000 -DynamicMemoryEnabled $false
	I0624 04:21:39.733419    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:39.734365    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:39.734365    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000 -Count 2
	I0624 04:21:41.926777    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:41.926983    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:41.927071    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\boot2docker.iso'
	I0624 04:21:44.518588    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:44.518588    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:44.518811    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd'
	I0624 04:21:47.186210    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:47.186413    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:47.186413    7764 main.go:141] libmachine: Starting VM...
	I0624 04:21:47.186525    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000
	I0624 04:21:50.225145    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:50.225145    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:50.225330    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:21:50.225368    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:21:55.042169    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:55.042617    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:56.056313    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:21:58.291501    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:21:58.291501    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:58.291927    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:00.837856    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:00.837856    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:01.851924    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:04.099467    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:04.099467    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:04.099836    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:06.625186    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:06.625498    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:07.631560    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:09.812566    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:09.812640    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:09.812834    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:12.338212    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:12.339008    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:13.345133    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:20.340082    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:20.340775    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:20.340775    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:22:20.340938    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:22.519783    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:22.519993    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:22.519993    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:25.084843    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:25.084843    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:25.091631    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:25.102901    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:25.102901    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:22:25.243672    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:22:25.243759    7764 buildroot.go:166] provisioning hostname "ha-340000"
	I0624 04:22:25.243889    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:29.917599    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:29.917684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:29.925952    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:29.926668    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:29.926668    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000 && echo "ha-340000" | sudo tee /etc/hostname
	I0624 04:22:30.097272    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000
	
	I0624 04:22:30.097272    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:32.279403    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:32.279403    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:32.279684    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:34.880756    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:34.880997    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:34.886216    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:34.886431    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:34.886431    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:22:35.042971    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:22:35.042971    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:22:35.042971    7764 buildroot.go:174] setting up certificates
	I0624 04:22:35.042971    7764 provision.go:84] configureAuth start
	I0624 04:22:35.042971    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:37.199678    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:37.199678    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:37.200116    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:41.987549    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:41.987743    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:41.987875    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:44.604967    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:44.605027    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:44.605027    7764 provision.go:143] copyHostCerts
	I0624 04:22:44.605027    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:22:44.605027    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:22:44.605027    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:22:44.605790    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:22:44.606959    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:22:44.607324    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:22:44.607324    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:22:44.607731    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:22:44.608681    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:22:44.608935    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:22:44.608935    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:22:44.608935    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:22:44.610235    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000 san=[127.0.0.1 172.31.219.170 ha-340000 localhost minikube]
	I0624 04:22:45.018783    7764 provision.go:177] copyRemoteCerts
	I0624 04:22:45.037552    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:22:45.037552    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:47.202779    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:47.203671    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:47.203671    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:49.806003    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:49.806250    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:49.806250    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:22:49.923562    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8859914s)
	I0624 04:22:49.923562    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:22:49.924207    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:22:49.970115    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:22:49.970666    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:22:50.017371    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:22:50.017371    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0624 04:22:50.067323    7764 provision.go:87] duration metric: took 15.0242947s to configureAuth
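Note: configureAuth generates a Docker server certificate for the VM and copies ca.pem, server.pem and server-key.pem into /etc/docker; the dockerd ExecStart written a few steps later listens on tcp://0.0.0.0:2376 with --tlsverify, so only clients presenting a certificate signed by the same CA (the cert.pem/key.pem pair from the auth options above) can reach it. A minimal sketch of such a mutual-TLS connection, assuming the CA and client pair have been copied into the working directory:

// dockertls.go - sketch of a mutual-TLS connection to the dockerd endpoint that
// the certificates copied above protect; the target IP comes from this log but
// should be treated as a placeholder for your own environment.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	clientCert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	conn, err := tls.Dial("tcp", "172.31.219.170:2376", &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{clientCert},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TLS handshake OK with", conn.RemoteAddr())
}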
	I0624 04:22:50.067323    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:22:50.068297    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:22:50.068444    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:52.216942    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:52.217239    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:52.217239    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:54.777665    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:54.778687    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:54.787038    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:54.787739    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:54.787739    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:22:54.925775    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:22:54.925775    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:22:54.925775    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:22:54.925775    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:57.101646    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:57.102006    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:57.102157    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:59.628474    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:59.628640    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:59.634236    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:59.634236    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:59.634858    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:22:59.809917    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:22:59.809917    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:04.562670    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:04.562984    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:04.569453    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:04.569618    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:04.569618    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:23:06.793171    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
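Note: the unit update is guarded — the rendered file is written to docker.service.new and only moved over docker.service (followed by daemon-reload, enable and restart) when diff reports a difference; on this fresh VM the old file does not exist yet, so the swap always runs and systemd creates the enablement symlink shown above. A small sketch of the same compare-then-install idea (hypothetical local paths, not a remote host):

// swapunit.go - sketch of the "write .new, install only when changed" pattern
// used above for /lib/systemd/system/docker.service.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged replaces dst with src when their contents differ (or when dst
// does not exist yet). It reports whether a daemon-reload/restart is needed.
func installIfChanged(src, dst string) (bool, error) {
	newData, err := os.ReadFile(src)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(oldData, newData) {
		return false, nil // unchanged: leave the running service alone
	}
	if err := os.Rename(src, dst); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service.new", "/tmp/docker.service")
	fmt.Println(changed, err)
}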
	
	I0624 04:23:06.793171    7764 machine.go:97] duration metric: took 46.45222s to provisionDockerMachine
	I0624 04:23:06.793171    7764 client.go:171] duration metric: took 1m56.8035863s to LocalClient.Create
	I0624 04:23:06.793171    7764 start.go:167] duration metric: took 1m56.8035863s to libmachine.API.Create "ha-340000"
	I0624 04:23:06.793171    7764 start.go:293] postStartSetup for "ha-340000" (driver="hyperv")
	I0624 04:23:06.793171    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:23:06.806161    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:23:06.806161    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:08.926143    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:08.926143    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:08.926377    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:11.466322    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:11.466322    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:11.467489    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:11.587250    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7810151s)
	I0624 04:23:11.600522    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:23:11.608510    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:23:11.608623    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:23:11.609267    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:23:11.610539    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:23:11.610539    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:23:11.622168    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:23:11.642744    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:23:11.691185    7764 start.go:296] duration metric: took 4.8979958s for postStartSetup
	I0624 04:23:11.694303    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:13.871692    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:13.872159    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:13.872159    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:16.477510    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:16.478115    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:16.478346    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:23:16.481107    7764 start.go:128] duration metric: took 2m6.495872s to createHost
	I0624 04:23:16.481306    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:21.198075    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:21.198075    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:21.203031    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:21.203711    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:21.203711    7764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 04:23:21.335720    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228201.341230144
	
	I0624 04:23:21.335720    7764 fix.go:216] guest clock: 1719228201.341230144
	I0624 04:23:21.335720    7764 fix.go:229] Guest: 2024-06-24 04:23:21.341230144 -0700 PDT Remote: 2024-06-24 04:23:16.4812468 -0700 PDT m=+132.145167801 (delta=4.859983344s)
	I0624 04:23:21.335720    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:23.513971    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:23.514115    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:23.514115    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:26.064808    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:26.065655    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:26.071109    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:26.072117    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:26.072117    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228201
	I0624 04:23:26.223364    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:23:21 UTC 2024
	
	I0624 04:23:26.223364    7764 fix.go:236] clock set: Mon Jun 24 11:23:21 UTC 2024
	 (err=<nil>)
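Note: the clock fix-up reads the guest clock with date +%s.%N, compares it with the host-side timestamp captured when createHost returned (a delta of roughly 4.86s here), and writes the time back with sudo date -s @<seconds> so certificate validity and etcd timestamps stay consistent. A sketch of the drift computation, assuming a 9-digit nanosecond field from the guest and using the two values logged above:

// clockdrift.go - sketch of the guest/host clock comparison logged above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// driftSeconds returns guest minus host in seconds, given the raw `date +%s.%N` output.
func driftSeconds(guestStamp string, host time.Time) (float64, error) {
	parts := strings.SplitN(strings.TrimSpace(guestStamp), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // assumes 9-digit nanoseconds
	}
	return time.Unix(sec, nsec).Sub(host).Seconds(), nil
}

func main() {
	// Values from the log: guest reported 1719228201.341230144, host measured
	// 2024-06-24 04:23:16.4812468 -0700 PDT (epoch 1719228196.4812468).
	host := time.Unix(1719228196, 481246800)
	d, _ := driftSeconds("1719228201.341230144", host)
	if math.Abs(d) > 2 { // hypothetical threshold, not minikube's exact policy
		fmt.Printf("clock drift %.9fs between guest and host\n", d)
	}
}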
	I0624 04:23:26.223364    7764 start.go:83] releasing machines lock for "ha-340000", held for 2m16.2380929s
	I0624 04:23:26.223364    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:30.938259    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:30.938259    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:30.943876    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:23:30.943876    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:30.953669    7764 ssh_runner.go:195] Run: cat /version.json
	I0624 04:23:30.953669    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:33.194566    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:33.195066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:33.195066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:35.952934    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:35.952934    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:35.953511    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:35.972815    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:35.972815    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:35.972815    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:36.049516    7764 ssh_runner.go:235] Completed: cat /version.json: (5.0958285s)
	I0624 04:23:36.062876    7764 ssh_runner.go:195] Run: systemctl --version
	I0624 04:23:36.126207    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1823117s)
	I0624 04:23:36.138544    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 04:23:36.147541    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:23:36.158992    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:23:36.186226    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 04:23:36.186226    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:23:36.186688    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:23:36.234270    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:23:36.268492    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:23:36.287362    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:23:36.298844    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:23:36.334613    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:23:36.363421    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:23:36.394789    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:23:36.429368    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:23:36.461842    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:23:36.501126    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:23:36.536096    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:23:36.571166    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:23:36.602701    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:23:36.633777    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:36.831442    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
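Note: because Docker with the cgroupfs driver is the selected runtime, the containerd config that ships in the ISO is rewritten with the sed edits above — the sandbox image is pinned to pause:3.9, SystemdCgroup is forced to false, legacy runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d — before containerd is restarted. A sketch of the central rewrite (the SystemdCgroup flip) done with a Go regexp on an in-memory string:

// cgroupfs.go - sketch of the SystemdCgroup rewrite applied above to
// /etc/containerd/config.toml (operating on an in-memory string here).
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}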
	I0624 04:23:36.864134    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:23:36.876552    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:23:36.915498    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:23:36.950864    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:23:36.989119    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:23:37.025144    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:23:37.064005    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:23:37.128646    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:23:37.151027    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:23:37.195115    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:23:37.214468    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:23:37.232424    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:23:37.275601    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:23:37.475078    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:23:37.650885    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:23:37.651141    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:23:37.700501    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:37.884853    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:23:40.393713    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5088508s)
	I0624 04:23:40.405828    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:23:40.441287    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:23:40.474575    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:23:40.686866    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:23:40.889847    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:41.085222    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:23:41.126761    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:23:41.160981    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:41.330340    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 04:23:41.432864    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:23:41.447686    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:23:41.457333    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:23:41.468396    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:23:41.485785    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:23:41.541304    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
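Note: with cri-dockerd selected, crictl.yaml is pointed at unix:///var/run/cri-dockerd.sock, the cri-docker socket and service are unmasked, enabled and restarted, and start-up then waits up to 60s for the socket to appear before probing it with crictl version (output above). A sketch of that wait, assuming the socket path from the log (it only exists on the guest, not on the Windows host):

// waitsock.go - sketch of the "Will wait 60s for socket path" step above,
// polling until /var/run/cri-dockerd.sock shows up.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}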
	I0624 04:23:41.549992    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:23:41.592759    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:23:41.632360    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:23:41.632455    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:23:41.639061    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:23:41.639061    7764 ip.go:210] interface addr: 172.31.208.1/20
	I0624 04:23:41.651675    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:23:41.656963    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:23:41.690104    7764 kubeadm.go:877] updating cluster {Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 04:23:41.690317    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:23:41.699038    7764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 04:23:41.719639    7764 docker.go:685] Got preloaded images: 
	I0624 04:23:41.719639    7764 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0624 04:23:41.733682    7764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 04:23:41.763470    7764 ssh_runner.go:195] Run: which lz4
	I0624 04:23:41.769903    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0624 04:23:41.783414    7764 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 04:23:41.789824    7764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 04:23:41.789824    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0624 04:23:43.609061    7764 docker.go:649] duration metric: took 1.8391507s to copy over tarball
	I0624 04:23:43.622011    7764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 04:23:52.086436    7764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4643447s)
	I0624 04:23:52.086436    7764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0624 04:23:52.148557    7764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 04:23:52.174937    7764 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0624 04:23:52.226917    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:52.437942    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:23:56.005781    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.5677733s)
	I0624 04:23:56.017780    7764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 04:23:56.042635    7764 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 04:23:56.042635    7764 cache_images.go:84] Images are preloaded, skipping loading
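Note: the preload flow brackets the tarball copy with two docker images listings — the first finds kube-apiserver:v1.30.2 missing, so the ~360 MB preloaded-images tarball is scp'd to /preloaded.tar.lz4 and unpacked into /var with tar -I lz4, and the second (above) confirms every expected image is now present. A sketch of the "is anything missing" check over the docker images output:

// preloaded.go - sketch of the preload check above: does the `docker images
// --format {{.Repository}}:{{.Tag}}` listing contain every image we expect?
package main

import (
	"fmt"
	"strings"
)

func missingImages(dockerImagesOutput string, wanted []string) []string {
	have := map[string]bool{}
	for _, line := range strings.Split(dockerImagesOutput, "\n") {
		have[strings.TrimSpace(line)] = true
	}
	var missing []string
	for _, img := range wanted {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	out := "registry.k8s.io/kube-proxy:v1.30.2\nregistry.k8s.io/pause:3.9"
	wanted := []string{"registry.k8s.io/kube-apiserver:v1.30.2", "registry.k8s.io/pause:3.9"}
	fmt.Println(missingImages(out, wanted)) // [registry.k8s.io/kube-apiserver:v1.30.2]
}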
	I0624 04:23:56.042773    7764 kubeadm.go:928] updating node { 172.31.219.170 8443 v1.30.2 docker true true} ...
	I0624 04:23:56.043059    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.219.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
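Note: the kubelet drop-in above clears the inherited ExecStart and relaunches the v1.30.2 kubelet with the bootstrap kubeconfig, --hostname-override=ha-340000 and --node-ip=172.31.219.170, so the node registers under a stable name and address. A sketch of assembling that flag line (the struct and helper are hypothetical, not minikube's own types):

// kubeletflags.go - sketch of building the kubelet ExecStart line shown above.
package main

import (
	"fmt"
	"strings"
)

type kubeletOpts struct {
	Version  string
	NodeName string
	NodeIP   string
}

func execStart(o kubeletOpts) string {
	args := []string{
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", o.Version),
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + o.NodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + o.NodeIP,
	}
	return "ExecStart=" + strings.Join(args, " ")
}

func main() {
	fmt.Println(execStart(kubeletOpts{Version: "v1.30.2", NodeName: "ha-340000", NodeIP: "172.31.219.170"}))
}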
	I0624 04:23:56.057045    7764 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 04:23:56.094439    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:23:56.094439    7764 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 04:23:56.094439    7764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 04:23:56.095442    7764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.219.170 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-340000 NodeName:ha-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.219.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.219.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 04:23:56.095442    7764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.219.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-340000"
	  kubeletExtraArgs:
	    node-ip: 172.31.219.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.219.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
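Note: the generated kubeadm file is four YAML documents separated by --- : an InitConfiguration (advertise address, CRI socket, node registration), a ClusterConfiguration (control-plane endpoint, certSANs, admission plugins, etcd), a KubeletConfiguration (cgroupfs driver, eviction disabled) and a KubeProxyConfiguration (cluster CIDR). A sketch of splitting such a multi-document config and listing each document's kind, using plain string handling rather than a YAML library:

// kubeadmdocs.go - sketch of splitting the multi-document kubeadm config shown
// above and listing the kind of each document.
package main

import (
	"fmt"
	"strings"
)

func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}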
	
	I0624 04:23:56.095442    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:23:56.108035    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:23:56.138436    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:23:56.138670    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
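Note: the kube-vip static pod above advertises the control-plane VIP 172.31.223.254 on port 8443, uses the plndr-cp-lock lease for leader election so a single control-plane node owns the address at any time, and enables load-balancing of API traffic across control planes. Only two values in the manifest vary per cluster; a sketch of templating them (a hypothetical helper, not how minikube renders the file):

// kubevipenv.go - sketch of filling the per-cluster values (VIP and API port)
// into the kube-vip environment entries shown above.
package main

import (
	"os"
	"text/template"
)

const envTmpl = `    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("env").Parse(envTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "172.31.223.254", Port: 8443})
}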
	I0624 04:23:56.151951    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:23:56.171498    7764 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 04:23:56.185108    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0624 04:23:56.202043    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0624 04:23:56.235551    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:23:56.268996    7764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0624 04:23:56.301083    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0624 04:23:56.348034    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:23:56.354061    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:23:56.387829    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:56.575871    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:23:56.605517    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.219.170
	I0624 04:23:56.605517    7764 certs.go:194] generating shared ca certs ...
	I0624 04:23:56.605517    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.606272    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:23:56.606272    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:23:56.606898    7764 certs.go:256] generating profile certs ...
	I0624 04:23:56.607696    7764 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:23:56.607893    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt with IP's: []
	I0624 04:23:56.837938    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt ...
	I0624 04:23:56.837938    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt: {Name:mk7a961717cd144a9a6226fc54cbc5311507d6a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.838921    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key ...
	I0624 04:23:56.838921    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key: {Name:mkb0e92480b41b7bce6e00ed95fc97da3e4d0eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.840444    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd
	I0624 04:23:56.841030    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.223.254]
	I0624 04:23:57.197931    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd ...
	I0624 04:23:57.197931    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd: {Name:mk0f4e42831177c49aaaa6224c50197a22ff86db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.198322    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd ...
	I0624 04:23:57.199325    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd: {Name:mkcb22653d05488567d8983f905ac28f3454628f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.200162    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:23:57.211183    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:23:57.212163    7764 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
	I0624 04:23:57.213222    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt with IP's: []
	I0624 04:23:57.742401    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt ...
	I0624 04:23:57.742401    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt: {Name:mk2780f04cc254cb73365d9b3a14af5e323b09a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.744698    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key ...
	I0624 04:23:57.744698    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key: {Name:mk101c0b70a91ff5ab1d2d4d42de1908d2028086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
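Note: three profile certificates are issued under the shared minikube CA — a client certificate for minikube-user, an API-server serving certificate whose SANs cover the service address 10.96.0.1, loopback, 10.0.0.1, the node IP 172.31.219.170 and the HA VIP 172.31.223.254, and an aggregator (proxy-client) certificate. A sketch of issuing a serving certificate with those IP SANs using crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key:

// apiservercert.go - sketch of a serving certificate with the IP SANs logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.31.219.170"),
			net.ParseIP("172.31.223.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}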
	I0624 04:23:57.745282    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:23:57.746323    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:23:57.746541    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:23:57.746729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:23:57.746880    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:23:57.747033    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:23:57.747161    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:23:57.755492    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:23:57.756741    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:23:57.756741    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:23:57.757505    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:23:57.757641    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:23:57.757878    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:23:57.758114    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:23:57.758338    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:23:57.758338    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:23:57.758951    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:23:57.759142    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:57.759142    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:23:57.806308    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:23:57.846394    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:23:57.894406    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:23:57.936976    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0624 04:23:57.981900    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:23:58.026283    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:23:58.071174    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:23:58.118729    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:23:58.162689    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:23:58.209251    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:23:58.264136    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 04:23:58.319246    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:23:58.344517    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:23:58.379159    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:23:58.385894    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:23:58.398545    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:23:58.419137    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 04:23:58.449497    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:23:58.481085    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.489104    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.502444    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.523530    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:23:58.566687    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:23:58.596556    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.604266    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.616277    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.643356    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
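	[editor's note] The paired "openssl x509 -hash -noout -in <cert>" and "ln -fs <cert> /etc/ssl/certs/<hash>.0" commands above follow OpenSSL's hashed-directory convention: each CA file is linked under its subject hash so TLS libraries can find it. A minimal Go sketch of that single step, assuming a hypothetical linkCACert helper rather than minikube's actual certs.go code:

    // Hypothetical helper (not minikube's certs.go): mirrors the openssl-hash +
    // ln -fs pair logged above for each CA file.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCACert(certPath string) error {
        // openssl x509 -hash -noout -in <cert> prints the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }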
	I0624 04:23:58.673517    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:23:58.680941    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:23:58.681514    7764 kubeadm.go:391] StartCluster: {Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:23:58.691375    7764 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 04:23:58.724770    7764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 04:23:58.755204    7764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 04:23:58.784577    7764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 04:23:58.800464    7764 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 04:23:58.800464    7764 kubeadm.go:156] found existing configuration files:
	
	I0624 04:23:58.811776    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 04:23:58.828218    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 04:23:58.839439    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 04:23:58.867660    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 04:23:58.888876    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 04:23:58.900681    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 04:23:58.929572    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 04:23:58.947003    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 04:23:58.959302    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 04:23:58.986530    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 04:23:58.999749    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 04:23:59.011493    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 04:23:59.028055    7764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 04:23:59.464999    7764 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 04:24:15.136135    7764 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0624 04:24:15.136316    7764 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 04:24:15.136432    7764 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 04:24:15.136605    7764 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 04:24:15.136605    7764 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 04:24:15.136605    7764 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 04:24:15.139173    7764 out.go:204]   - Generating certificates and keys ...
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0624 04:24:15.140270    7764 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0624 04:24:15.140395    7764 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0624 04:24:15.140505    7764 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0624 04:24:15.140605    7764 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-340000 localhost] and IPs [172.31.219.170 127.0.0.1 ::1]
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-340000 localhost] and IPs [172.31.219.170 127.0.0.1 ::1]
	I0624 04:24:15.141549    7764 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0624 04:24:15.141710    7764 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0624 04:24:15.141810    7764 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 04:24:15.142393    7764 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 04:24:15.142645    7764 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 04:24:15.142901    7764 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 04:24:15.143067    7764 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 04:24:15.144863    7764 out.go:204]   - Booting up control plane ...
	I0624 04:24:15.145932    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 04:24:15.146750    7764 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 04:24:15.146791    7764 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 04:24:15.146791    7764 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0624 04:24:15.147325    7764 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00217517s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [api-check] The API server is healthy after 9.024821827s
	I0624 04:24:15.148027    7764 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 04:24:15.148229    7764 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 04:24:15.148229    7764 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 04:24:15.148749    7764 kubeadm.go:309] [mark-control-plane] Marking the node ha-340000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 04:24:15.148996    7764 kubeadm.go:309] [bootstrap-token] Using token: uksowa.dnkew0jmxpcatm2d
	I0624 04:24:15.151402    7764 out.go:204]   - Configuring RBAC rules ...
	I0624 04:24:15.151402    7764 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 04:24:15.151402    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 04:24:15.152109    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 04:24:15.152479    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 04:24:15.152815    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 04:24:15.153107    7764 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 04:24:15.153464    7764 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 04:24:15.153636    7764 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 04:24:15.153766    7764 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 04:24:15.153766    7764 kubeadm.go:309] 
	I0624 04:24:15.153975    7764 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 04:24:15.154026    7764 kubeadm.go:309] 
	I0624 04:24:15.154248    7764 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 04:24:15.154248    7764 kubeadm.go:309] 
	I0624 04:24:15.154248    7764 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 04:24:15.154248    7764 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 04:24:15.154248    7764 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 04:24:15.154248    7764 kubeadm.go:309] 
	I0624 04:24:15.154802    7764 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 04:24:15.154802    7764 kubeadm.go:309] 
	I0624 04:24:15.155014    7764 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 04:24:15.155081    7764 kubeadm.go:309] 
	I0624 04:24:15.155261    7764 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 04:24:15.155396    7764 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 04:24:15.155692    7764 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 04:24:15.155892    7764 kubeadm.go:309] 
	I0624 04:24:15.156103    7764 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 04:24:15.156289    7764 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 04:24:15.156344    7764 kubeadm.go:309] 
	I0624 04:24:15.156537    7764 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uksowa.dnkew0jmxpcatm2d \
	I0624 04:24:15.156841    7764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 \
	I0624 04:24:15.156896    7764 kubeadm.go:309] 	--control-plane 
	I0624 04:24:15.156949    7764 kubeadm.go:309] 
	I0624 04:24:15.157051    7764 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 04:24:15.157051    7764 kubeadm.go:309] 
	I0624 04:24:15.157051    7764 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uksowa.dnkew0jmxpcatm2d \
	I0624 04:24:15.157586    7764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
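	[editor's note] The --discovery-token-ca-cert-hash value printed in the join commands above is kubeadm's public-key pin: the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from the CA file used here (illustrative only, not part of the test harness):

    // Illustrative only: recompute the kubeadm CA public-key pin
    // (sha256 over the CA cert's Subject Public Key Info).
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }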
	I0624 04:24:15.157628    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:24:15.157628    7764 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 04:24:15.160476    7764 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0624 04:24:15.178108    7764 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0624 04:24:15.185898    7764 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0624 04:24:15.185959    7764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0624 04:24:15.236910    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0624 04:24:15.801252    7764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 04:24:15.816797    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000 minikube.k8s.io/updated_at=2024_06_24T04_24_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=true
	I0624 04:24:15.816797    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:15.865474    7764 ops.go:34] apiserver oom_adj: -16
	I0624 04:24:16.053451    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:16.554318    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:17.055668    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:17.555246    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:18.060924    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:18.560879    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:19.064259    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:19.564139    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:20.066338    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:20.554764    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:21.059251    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:21.563335    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:22.064537    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:22.568492    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:23.054248    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:23.554295    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:24.061114    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:24.561845    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:25.053375    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:25.555713    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:26.058954    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:26.556888    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:27.064526    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:27.188097    7764 kubeadm.go:1107] duration metric: took 11.386803s to wait for elevateKubeSystemPrivileges
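	[editor's note] The repeated "kubectl get sa default" runs above are a readiness poll: the harness retries roughly every 500ms until the default ServiceAccount exists, which is when elevateKubeSystemPrivileges is considered done. A hedged sketch of that polling loop; the function and argument names are hypothetical, and in the real run the command goes through ssh_runner on the guest rather than running locally:

    // Hypothetical sketch of the readiness poll shown in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Same command as in the log: kubectl get sa default --kubeconfig=...
            if exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig).Run() == nil {
                return nil // default ServiceAccount exists; kube-system is usable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready within %s", timeout)
    }

    func main() {
        err := waitForDefaultServiceAccount("/var/lib/minikube/binaries/v1.30.2/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }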
	W0624 04:24:27.188254    7764 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 04:24:27.188254    7764 kubeadm.go:393] duration metric: took 28.5066348s to StartCluster
	I0624 04:24:27.188254    7764 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:24:27.188625    7764 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:24:27.190266    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:24:27.191900    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0624 04:24:27.191900    7764 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:24:27.191900    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:24:27.191900    7764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 04:24:27.191900    7764 addons.go:69] Setting storage-provisioner=true in profile "ha-340000"
	I0624 04:24:27.191900    7764 addons.go:69] Setting default-storageclass=true in profile "ha-340000"
	I0624 04:24:27.191900    7764 addons.go:234] Setting addon storage-provisioner=true in "ha-340000"
	I0624 04:24:27.191900    7764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-340000"
	I0624 04:24:27.191900    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:24:27.191900    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:24:27.193487    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:27.194012    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:27.352442    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0624 04:24:27.795751    7764 start.go:946] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
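	[editor's note] The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the Hyper-V host IP from inside the cluster. A minimal Go sketch of the same Corefile edit, assuming a hypothetical injectHostRecord helper rather than minikube's sed-based implementation:

    // Hypothetical sketch: insert a hosts{} block ahead of the forward
    // directive in a CoreDNS Corefile, as the sed pipeline above does.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := "        hosts {\n" +
            "           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        forward := "        forward . /etc/resolv.conf"
        // Insert the hosts block immediately before the forward directive.
        return strings.Replace(corefile, forward, hostsBlock+forward, 1)
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "172.31.208.1"))
    }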
	I0624 04:24:29.485008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:29.486001    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:29.485008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:29.486001    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:29.486959    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:24:29.487597    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 04:24:29.489085    7764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 04:24:29.489085    7764 cert_rotation.go:137] Starting client certificate rotation controller
	I0624 04:24:29.489508    7764 addons.go:234] Setting addon default-storageclass=true in "ha-340000"
	I0624 04:24:29.489687    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:24:29.490896    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:29.491561    7764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 04:24:29.491561    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 04:24:29.491561    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:31.839622    7764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 04:24:31.839622    7764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:24:34.221948    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:34.221985    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:34.222081    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:24:34.771646    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:24:34.771987    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:34.772282    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:24:34.936733    7764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 04:24:37.022840    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:24:37.023554    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:37.023763    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:24:37.165177    7764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 04:24:37.341265    7764 round_trippers.go:463] GET https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0624 04:24:37.341349    7764 round_trippers.go:469] Request Headers:
	I0624 04:24:37.341441    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:24:37.341441    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:24:37.356087    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:24:37.356870    7764 round_trippers.go:463] PUT https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0624 04:24:37.356870    7764 round_trippers.go:469] Request Headers:
	I0624 04:24:37.356870    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:24:37.356870    7764 round_trippers.go:473]     Content-Type: application/json
	I0624 04:24:37.356870    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:24:37.360461    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:24:37.365109    7764 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0624 04:24:37.367763    7764 addons.go:510] duration metric: took 10.1758252s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0624 04:24:37.367763    7764 start.go:245] waiting for cluster config update ...
	I0624 04:24:37.367763    7764 start.go:254] writing updated cluster config ...
	I0624 04:24:37.374431    7764 out.go:177] 
	I0624 04:24:37.382235    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:24:37.382235    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:24:37.388747    7764 out.go:177] * Starting "ha-340000-m02" control-plane node in "ha-340000" cluster
	I0624 04:24:37.391432    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:24:37.391432    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:24:37.391432    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:24:37.391969    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:24:37.392074    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:24:37.396446    7764 start.go:360] acquireMachinesLock for ha-340000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:24:37.397173    7764 start.go:364] duration metric: took 727.7µs to acquireMachinesLock for "ha-340000-m02"
	I0624 04:24:37.397173    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:24:37.397173    7764 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0624 04:24:37.400405    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:24:37.400992    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:24:37.400992    7764 client.go:168] LocalClient.Create starting
	I0624 04:24:37.400992    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:24:37.401661    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:24:37.401661    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:24:37.401887    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:24:37.402034    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:24:37.402034    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:24:37.402034    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:24:39.318919    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:24:39.318919    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:39.319549    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:24:41.047822    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:24:41.047822    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:41.048685    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:24:42.553257    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:24:42.553257    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:42.553951    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:24:46.282665    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:24:46.283030    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:46.285249    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:24:46.793132    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:24:47.414021    7764 main.go:141] libmachine: Creating VM...
	I0624 04:24:47.414021    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:24:50.209235    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:24:50.209299    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:50.209299    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:24:50.209299    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:24:51.955162    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:24:51.955946    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:51.955946    7764 main.go:141] libmachine: Creating VHD
	I0624 04:24:51.955946    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:24:55.745966    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 768F4D99-0FAB-4B12-BB36-FE2052C9BA0F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:24:55.746079    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:55.746079    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:24:55.746154    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:24:55.755598    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:24:58.942374    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:24:58.943424    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:58.943503    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd' -SizeBytes 20000MB
	I0624 04:25:01.505369    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:01.505433    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:01.505433    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:25:05.172912    7764 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-340000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:25:05.173540    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:05.173540    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000-m02 -DynamicMemoryEnabled $false
	I0624 04:25:07.453955    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:07.454137    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:07.454250    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000-m02 -Count 2
	I0624 04:25:09.608832    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:09.608994    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:09.608994    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\boot2docker.iso'
	I0624 04:25:12.197231    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:12.197231    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:12.197345    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd'
	I0624 04:25:14.893400    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:14.894320    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:14.894381    7764 main.go:141] libmachine: Starting VM...
	I0624 04:25:14.894430    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000-m02
	I0624 04:25:17.989169    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:17.989169    7764 main.go:141] libmachine: [stderr =====>] : 
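	[editor's note] Every Hyper-V operation in this log follows the same pattern: libmachine composes a PowerShell command and runs it through powershell.exe -NoProfile -NonInteractive, then parses stdout/stderr. A sketch of that pattern, reusing the VM name and paths from the log above; the runPowerShell helper itself is illustrative, not the driver's actual code:

    // Illustrative sketch of the powershell.exe invocation pattern above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func runPowerShell(args ...string) (string, error) {
        full := append([]string{"-NoProfile", "-NonInteractive"}, args...)
        out, err := exec.Command("powershell.exe", full...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Same final steps as logged above: attach the ISO and disk, then start the VM.
        steps := [][]string{
            {`Hyper-V\Set-VMDvdDrive`, "-VMName", "ha-340000-m02", "-Path", `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\boot2docker.iso`},
            {`Hyper-V\Add-VMHardDiskDrive`, "-VMName", "ha-340000-m02", "-Path", `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd`},
            {`Hyper-V\Start-VM`, "ha-340000-m02"},
        }
        for _, step := range steps {
            if out, err := runPowerShell(step...); err != nil {
                fmt.Printf("%s failed: %v\n%s\n", step[0], err, out)
                return
            }
        }
    }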
	I0624 04:25:17.989169    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:25:17.989814    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:20.355744    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:20.356405    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:20.356405    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:22.989302    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:22.989355    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:23.999995    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:26.249874    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:26.250015    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:26.250090    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:28.897605    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:28.897605    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:29.905864    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:32.170766    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:32.170766    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:32.170897    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:34.772612    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:34.772697    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:35.780004    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:38.017664    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:38.018480    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:38.018552    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:40.601923    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:40.601923    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:41.606232    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:43.841714    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:43.841714    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:43.842207    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:48.637835    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:48.637835    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:48.637835    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:25:48.638845    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:50.886637    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:50.886637    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:50.886919    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:53.505477    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:53.505477    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:53.512076    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:25:53.522242    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:25:53.522242    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:25:53.639546    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:25:53.639617    7764 buildroot.go:166] provisioning hostname "ha-340000-m02"
	I0624 04:25:53.639617    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:55.827051    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:55.827051    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:55.827150    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:58.375036    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:58.375036    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:58.380693    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:25:58.381469    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:25:58.381469    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000-m02 && echo "ha-340000-m02" | sudo tee /etc/hostname
	I0624 04:25:58.528102    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000-m02
	
	I0624 04:25:58.528102    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:00.719171    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:00.720126    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:00.720206    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:03.334838    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:03.335192    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:03.340445    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:03.340445    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:03.340971    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:26:03.486211    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:26:03.487218    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:26:03.487218    7764 buildroot.go:174] setting up certificates
	I0624 04:26:03.487218    7764 provision.go:84] configureAuth start
	I0624 04:26:03.487218    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:05.662799    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:05.663055    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:05.663055    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:08.255722    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:08.255810    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:08.255880    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:13.036980    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:13.036980    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:13.037081    7764 provision.go:143] copyHostCerts
	I0624 04:26:13.037081    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:26:13.037081    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:26:13.037081    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:26:13.037806    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:26:13.039153    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:26:13.039509    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:26:13.039548    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:26:13.039651    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:26:13.040819    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:26:13.041319    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:26:13.041319    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:26:13.041534    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:26:13.042819    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000-m02 san=[127.0.0.1 172.31.216.99 ha-340000-m02 localhost minikube]
	I0624 04:26:13.402007    7764 provision.go:177] copyRemoteCerts
	I0624 04:26:13.414354    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:26:13.414354    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:15.595976    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:15.596447    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:15.596532    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:18.162535    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:18.162764    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:18.162764    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:26:18.260547    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8461764s)
	I0624 04:26:18.261519    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:26:18.261519    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:26:18.310608    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:26:18.311101    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:26:18.360412    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:26:18.361400    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0624 04:26:18.409824    7764 provision.go:87] duration metric: took 14.922552s to configureAuth
	I0624 04:26:18.409878    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:26:18.409878    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:26:18.410406    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:20.545474    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:20.545474    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:20.546009    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:23.115223    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:23.115223    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:23.121559    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:23.122148    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:23.122349    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:26:23.250055    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:26:23.250055    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:26:23.250598    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:26:23.250598    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:25.409363    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:25.409363    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:25.409459    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:27.987115    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:27.987185    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:27.992587    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:27.993439    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:27.993532    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.219.170"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:26:28.146924    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.219.170
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:26:28.146924    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:32.919684    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:32.919684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:32.923724    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:32.924696    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:32.924696    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:26:35.093030    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 04:26:35.093030    7764 machine.go:97] duration metric: took 46.4550276s to provisionDockerMachine
	I0624 04:26:35.093030    7764 client.go:171] duration metric: took 1m57.6916074s to LocalClient.Create
	I0624 04:26:35.093030    7764 start.go:167] duration metric: took 1m57.6916074s to libmachine.API.Create "ha-340000"
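The docker.service install above is deliberately idempotent: the `diff ... || { mv ...; }` sequence only swaps in docker.service.new and restarts Docker when the freshly generated unit differs from the one on disk (on this new node diff fails because no unit exists yet, so it is installed and enabled for the first time). A minimal standalone sketch of the same pattern, with placeholder paths:

    # Install/refresh a systemd unit only when its content actually changed (placeholder paths).
    new=/tmp/docker.service.new                    # freshly rendered unit
    cur=/lib/systemd/system/docker.service         # currently installed unit (may not exist yet)
    if ! sudo diff -u "$cur" "$new"; then          # differs, or $cur is missing
        sudo mv "$new" "$cur"                      # swap the new unit into place
        sudo systemctl -f daemon-reload            # pick up the change
        sudo systemctl -f enable docker            # ensure it starts on boot
        sudo systemctl -f restart docker           # apply the new configuration now
    fi
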
	I0624 04:26:35.093030    7764 start.go:293] postStartSetup for "ha-340000-m02" (driver="hyperv")
	I0624 04:26:35.093030    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:26:35.106015    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:26:35.106015    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:37.247227    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:37.247227    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:37.247556    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:39.804070    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:39.804720    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:39.804853    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:26:39.901290    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7952578s)
	I0624 04:26:39.914835    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:26:39.922603    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:26:39.922603    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:26:39.922603    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:26:39.923840    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:26:39.923840    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:26:39.936239    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:26:39.955354    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:26:40.001075    7764 start.go:296] duration metric: took 4.908027s for postStartSetup
	I0624 04:26:40.003712    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:42.166041    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:42.166041    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:42.166806    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:44.756066    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:44.756308    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:44.756442    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:26:44.758843    7764 start.go:128] duration metric: took 2m7.3612037s to createHost
	I0624 04:26:44.758843    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:46.909322    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:46.910138    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:46.910232    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:49.479126    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:49.479126    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:49.486897    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:49.487454    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:49.487521    7764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 04:26:49.611913    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228409.617659899
	
	I0624 04:26:49.611992    7764 fix.go:216] guest clock: 1719228409.617659899
	I0624 04:26:49.611992    7764 fix.go:229] Guest: 2024-06-24 04:26:49.617659899 -0700 PDT Remote: 2024-06-24 04:26:44.7588432 -0700 PDT m=+340.421999101 (delta=4.858816699s)
	I0624 04:26:49.612053    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:51.806686    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:51.806686    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:51.807796    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:54.360122    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:54.360717    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:54.366743    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:54.367456    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:54.367456    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228409
	I0624 04:26:54.501554    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:26:49 UTC 2024
	
	I0624 04:26:54.501554    7764 fix.go:236] clock set: Mon Jun 24 11:26:49 UTC 2024
	 (err=<nil>)
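The clock probe above is logged through a Go format string with missing arguments, hence `date +%!s(MISSING).%!N(MISSING)`; judging from the reply (1719228409.617659899) the command actually run on the guest is `date +%s.%N`, and the skew against the host (delta=4.858816699s here) is corrected with `sudo date -s @<epoch>`. A minimal sketch of the same check, assuming a POSIX host shell, SSH access to the guest, and an illustrative 2-second threshold (minikube's real threshold is not shown in this log):

    # Compare the guest clock to the host clock and reset it if the skew is large (illustrative threshold).
    GUEST=docker@172.31.216.99                                   # guest address/user from the log above
    guest_epoch=$(ssh "$GUEST" 'date +%s.%N')                    # guest time as seconds.nanoseconds
    host_epoch=$(date +%s.%N)                                    # host time for comparison
    delta=$(echo "$host_epoch - $guest_epoch" | bc | tr -d '-')  # absolute skew in seconds
    if (( $(echo "$delta > 2" | bc -l) )); then
        ssh "$GUEST" "sudo date -s @$(date +%s)"                 # set the guest clock to the host's epoch
    fi
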
	I0624 04:26:54.501554    7764 start.go:83] releasing machines lock for "ha-340000-m02", held for 2m17.1038801s
	I0624 04:26:54.501554    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:56.656929    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:56.656929    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:56.657237    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:59.232814    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:59.233696    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:59.238013    7764 out.go:177] * Found network options:
	I0624 04:26:59.240974    7764 out.go:177]   - NO_PROXY=172.31.219.170
	W0624 04:26:59.243674    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:26:59.245988    7764 out.go:177]   - NO_PROXY=172.31.219.170
	W0624 04:26:59.248651    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:26:59.250072    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:26:59.253482    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:26:59.253655    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:59.263647    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 04:26:59.263647    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:27:01.487876    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:01.488890    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:01.488928    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:01.488928    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:01.488996    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:01.488996    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:04.227031    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:27:04.227031    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:04.227248    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:27:04.253655    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:27:04.254270    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:04.254455    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:27:04.325713    7764 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0619239s)
	W0624 04:27:04.325787    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:27:04.339117    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:27:04.410631    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 04:27:04.410771    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:27:04.410771    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1572701s)
	I0624 04:27:04.410943    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:27:04.456796    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:27:04.488782    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:27:04.511209    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:27:04.525207    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:27:04.558383    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:27:04.594351    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:27:04.632156    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:27:04.668644    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:27:04.700592    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:27:04.731598    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:27:04.764902    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:27:04.797247    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:27:04.829805    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:27:04.865930    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:05.068575    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:27:05.101121    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:27:05.115315    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:27:05.153433    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:27:05.186606    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:27:05.229598    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:27:05.265598    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:27:05.304154    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:27:05.362709    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:27:05.384787    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:27:05.427339    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:27:05.444634    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:27:05.460900    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:27:05.500871    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:27:05.685702    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:27:05.875302    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:27:05.875472    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:27:05.922983    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:06.124475    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:27:08.649108    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.524624s)
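docker.go:574 above reports writing a 130-byte /etc/docker/daemon.json to switch Docker to the cgroupfs cgroup driver before the restart that just completed. The file's exact bytes are not shown in the log; a representative daemon.json that achieves the same switch (contents are an assumption, not a dump of the real file) would be:

    # Write a representative daemon.json selecting the cgroupfs cgroup driver, then restart Docker.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
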
	I0624 04:27:08.662442    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:27:08.698813    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:27:08.734411    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:27:08.926039    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:27:09.149621    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:09.357548    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:27:09.401745    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:27:09.440775    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:09.652315    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 04:27:09.760985    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:27:09.773475    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:27:09.783373    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:27:09.798224    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:27:09.815600    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:27:09.874275    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 04:27:09.885275    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:27:09.928991    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:27:09.966535    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:27:09.969141    7764 out.go:177]   - env NO_PROXY=172.31.219.170
	I0624 04:27:09.971901    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:27:09.980875    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:27:09.980875    7764 ip.go:210] interface addr: 172.31.208.1/20
	I0624 04:27:09.993751    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:27:09.998900    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
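The one-liner above rewrites /etc/hosts through a temp file rather than redirecting under sudo: a plain `sudo cmd > /etc/hosts` would fail because the redirection is performed by the unprivileged shell, not by sudo. Expanded into a readable form (same commands as the logged one-liner):

    # Replace any existing host.minikube.internal entry with the current host IP.
    {
        grep -v $'\thost.minikube.internal$' /etc/hosts     # keep every other line
        printf '172.31.208.1\thost.minikube.internal\n'     # append the fresh mapping
    } > /tmp/h.$$                                           # build the new file as the normal user
    sudo cp /tmp/h.$$ /etc/hosts                            # install it with root privileges
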
	I0624 04:27:10.021742    7764 mustload.go:65] Loading cluster: ha-340000
	I0624 04:27:10.022138    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:27:10.022138    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:12.145793    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:12.146689    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:12.146689    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:27:12.147545    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.216.99
	I0624 04:27:12.147545    7764 certs.go:194] generating shared ca certs ...
	I0624 04:27:12.147672    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.148287    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:27:12.148567    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:27:12.148856    7764 certs.go:256] generating profile certs ...
	I0624 04:27:12.149438    7764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:27:12.149513    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773
	I0624 04:27:12.149734    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.216.99 172.31.223.254]
	I0624 04:27:12.535121    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 ...
	I0624 04:27:12.535121    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773: {Name:mk8a3e94f1cd57107053c19999e9ccd02984f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.537249    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773 ...
	I0624 04:27:12.537249    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773: {Name:mk3e4f8cf08b142d4c6b8b2e4d0c2e9e09cde3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.538494    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:27:12.548871    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:27:12.549729    7764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
	I0624 04:27:12.549729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:27:12.549729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:27:12.550813    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:27:12.550846    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:27:12.551173    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:27:12.551173    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:27:12.551525    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:27:12.551851    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:27:12.552013    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:27:12.552013    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:27:12.552625    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:27:12.552788    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:27:12.553076    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:27:12.553339    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:27:12.553609    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:27:12.553609    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:27:12.554183    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:27:12.554343    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:12.554488    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:14.719497    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:14.719833    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:14.719833    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:17.350977    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:27:17.351051    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:17.351051    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:27:17.461926    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0624 04:27:17.470044    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0624 04:27:17.501978    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0624 04:27:17.511100    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0624 04:27:17.543488    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0624 04:27:17.550954    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0624 04:27:17.583396    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0624 04:27:17.590351    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0624 04:27:17.627495    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0624 04:27:17.633981    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0624 04:27:17.667148    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0624 04:27:17.673386    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0624 04:27:17.693848    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:27:17.745421    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:27:17.791072    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:27:17.839792    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:27:17.886067    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0624 04:27:17.938597    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:27:17.985439    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:27:18.029804    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:27:18.082647    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:27:18.126440    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:27:18.175997    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:27:18.225216    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0624 04:27:18.257530    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0624 04:27:18.290661    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0624 04:27:18.321759    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0624 04:27:18.355194    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0624 04:27:18.389446    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0624 04:27:18.423705    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0624 04:27:18.466299    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:27:18.488719    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:27:18.524815    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.532250    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.546205    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.570264    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:27:18.603714    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:27:18.634492    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.641875    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.655114    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.680930    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 04:27:18.712285    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:27:18.746203    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:27:18.753921    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:27:18.767115    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:27:18.790163    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
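The 8-hex-digit link names created in this block (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: tools that scan /etc/ssl/certs locate a CA by hashing its subject and opening `<hash>.0`. The link for any PEM certificate can be derived the same way (the path below is illustrative):

    # Create the subject-hash symlink that OpenSSL-based tools expect in /etc/ssl/certs.
    cert=/usr/share/ca-certificates/minikubeCA.pem      # any PEM certificate
    hash=$(openssl x509 -hash -noout -in "$cert")       # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"      # .0 suffix disambiguates hash collisions
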
	I0624 04:27:18.824124    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:27:18.830338    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:27:18.830338    7764 kubeadm.go:928] updating node {m02 172.31.216.99 8443 v1.30.2 docker true true} ...
	I0624 04:27:18.830338    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 04:27:18.830861    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:27:18.844401    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:27:18.871458    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:27:18.872308    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
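With cp_enable and lb_enable both set in the manifest above, the kube-vip pods on the control-plane nodes run leader election and the current leader answers for the virtual IP 172.31.223.254 on vip_interface eth0. Two quick checks for which node holds the VIP, assuming SSH access to a control-plane guest and using the lease name from the manifest:

    # On a control-plane node: the current leader has the VIP bound to eth0.
    ip -4 addr show dev eth0 | grep 172.31.223.254

    # The leader-election lease named in the manifest (plndr-cp-lock) records the holder.
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
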
	I0624 04:27:18.885304    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:27:18.906359    7764 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0624 04:27:18.917667    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0624 04:27:18.940653    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl
	I0624 04:27:18.940770    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet
	I0624 04:27:18.940770    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm
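Each binary above is fetched from dl.k8s.io with a `checksum=file:` query, i.e. the download is verified against the published .sha256 file before being cached. The equivalent manual steps for one of the binaries (same URLs as in the log):

    # Download kubectl v1.30.2 for linux/amd64 and verify it against the published checksum.
    ver=v1.30.2
    curl -fLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubectl"
    curl -fLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # prints "kubectl: OK" on success
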
	I0624 04:27:20.019379    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:27:20.031677    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:27:20.040388    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 04:27:20.040619    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0624 04:27:22.295204    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:27:22.311229    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:27:22.319343    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 04:27:22.319532    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0624 04:27:24.787183    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:27:24.813058    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:27:24.827112    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:27:24.836021    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 04:27:24.836326    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0624 04:27:25.350318    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0624 04:27:25.369869    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0624 04:27:25.404681    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:27:25.434302    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0624 04:27:25.475142    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:27:25.481186    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:27:25.513681    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:25.719380    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:27:25.749842    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:27:25.749842    7764 start.go:316] joinCluster: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:27:25.749842    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0624 04:27:25.750824    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:27.897413    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:27.898468    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:27.898580    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:30.474274    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:27:30.475230    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:30.475465    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:27:30.691544    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9407016s)
	I0624 04:27:30.691544    7764 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:27:30.691544    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jly6bg.uk30wjiudedznfhh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m02 --control-plane --apiserver-advertise-address=172.31.216.99 --apiserver-bind-port=8443"
	I0624 04:28:10.348302    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jly6bg.uk30wjiudedznfhh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m02 --control-plane --apiserver-advertise-address=172.31.216.99 --apiserver-bind-port=8443": (39.6566068s)
	I0624 04:28:10.348302    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0624 04:28:11.123441    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000-m02 minikube.k8s.io/updated_at=2024_06_24T04_28_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=false
	I0624 04:28:11.296357    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-340000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0624 04:28:11.477881    7764 start.go:318] duration metric: took 45.7278657s to joinCluster
	I0624 04:28:11.479101    7764 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:28:11.479942    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:28:11.487444    7764 out.go:177] * Verifying Kubernetes components...
	I0624 04:28:11.505070    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:28:11.857429    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:28:11.909471    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:28:11.910146    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0624 04:28:11.910146    7764 kubeadm.go:477] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.219.170:8443
	I0624 04:28:11.911347    7764 node_ready.go:35] waiting up to 6m0s for node "ha-340000-m02" to be "Ready" ...
	I0624 04:28:11.911438    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:11.911588    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:11.911588    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:11.911588    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:11.927463    7764 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 04:28:12.413201    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:12.413550    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:12.413550    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:12.413626    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:12.444372    7764 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0624 04:28:12.925886    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:12.925983    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:12.925983    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:12.925983    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:12.932451    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:13.417147    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:13.417227    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:13.417227    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:13.417227    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:13.423642    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:13.926152    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:13.926272    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:13.926272    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:13.926272    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:13.931245    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:13.932638    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:14.425494    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:14.425494    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:14.425494    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:14.425494    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:14.431769    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:14.916726    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:14.916779    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:14.916779    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:14.916779    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:14.921933    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:15.414395    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:15.414437    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:15.414437    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:15.414437    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:15.424101    7764 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 04:28:15.921512    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:15.921512    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:15.921512    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:15.921512    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:15.925599    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:16.427098    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:16.427098    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:16.427098    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:16.427098    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:16.432021    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:16.433661    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:16.920270    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:16.920303    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:16.920303    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:16.920303    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:16.924782    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:17.414532    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:17.414701    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:17.414701    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:17.414763    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:17.419689    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:17.920789    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:17.920789    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:17.920789    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:17.920789    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:17.925280    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:18.414334    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:18.414334    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:18.414717    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:18.414717    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:18.422941    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:28:18.921289    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:18.921289    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:18.921289    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:18.921289    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.042617    7764 round_trippers.go:574] Response Status: 200 OK in 121 milliseconds
	I0624 04:28:19.043561    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:19.415270    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:19.415270    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:19.415572    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:19.415572    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.423845    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:28:19.918838    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:19.918911    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:19.918911    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:19.918911    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.924705    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:20.422302    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:20.422302    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:20.422302    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:20.422302    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:20.439795    7764 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0624 04:28:20.912535    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:20.912750    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:20.912750    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:20.912750    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:20.917252    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:21.412416    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:21.412416    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:21.412517    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:21.412517    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:21.417804    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:21.419265    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:21.927253    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:21.927474    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:21.927474    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:21.927474    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:21.931529    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:22.425328    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:22.425533    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:22.425533    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:22.425632    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:22.430375    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:22.924882    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:22.924882    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:22.924882    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:22.924882    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:22.929881    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:23.427060    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:23.427060    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:23.427060    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:23.427060    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:23.432229    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:23.434031    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:23.925304    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:23.925304    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:23.925304    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:23.925304    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:23.928986    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.425319    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.425703    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.425703    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.425703    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.430231    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.926082    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.926082    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.926082    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.926082    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.930652    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.931873    7764 node_ready.go:49] node "ha-340000-m02" has status "Ready":"True"
	I0624 04:28:24.931957    7764 node_ready.go:38] duration metric: took 13.0205605s for node "ha-340000-m02" to be "Ready" ...
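For reference, a minimal client-go sketch of the readiness poll recorded above (repeated GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02 roughly every 500 ms until the Ready condition turns True). This is illustrative only, not minikube's node_ready.go; the kubeconfig path is a placeholder for the profile's kubeconfig.

// node_ready_sketch.go: poll a node's Ready condition (illustrative only).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the run above loads C:\Users\jenkins.minikube1\minikube-integration\kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Equivalent of the logged request: GET /api/v1/nodes/ha-340000-m02
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-340000-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-340000-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // ~500 ms between polls, matching the log cadence
	}
}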
	I0624 04:28:24.932080    7764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:28:24.932215    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:24.932327    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.932327    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.932327    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.939750    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:24.948492    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.948492    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6xxtk
	I0624 04:28:24.949022    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.949022    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.949022    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.952872    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.954581    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.954581    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.954581    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.954581    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.959287    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.959977    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.959977    7764 pod_ready.go:81] duration metric: took 11.4848ms for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.959977    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.959977    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6zh6m
	I0624 04:28:24.959977    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.959977    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.959977    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.964602    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.965299    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.965299    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.965299    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.965502    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.969747    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.971353    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.971456    7764 pod_ready.go:81] duration metric: took 11.4793ms for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.971456    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.971641    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000
	I0624 04:28:24.971778    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.971801    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.971801    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.975248    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.975850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.975850    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.975850    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.975850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.980638    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.981301    7764 pod_ready.go:92] pod "etcd-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.981301    7764 pod_ready.go:81] duration metric: took 9.8443ms for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.981301    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.981301    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m02
	I0624 04:28:24.981301    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.981301    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.981301    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.984815    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.985812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.985812    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.985812    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.985812    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.990897    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:24.990897    7764 pod_ready.go:92] pod "etcd-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.990897    7764 pod_ready.go:81] duration metric: took 9.5963ms for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.990897    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.129269    7764 request.go:629] Waited for 138.2823ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:28:25.129345    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:28:25.129345    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.129345    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.129345    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.134197    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:25.332822    7764 request.go:629] Waited for 196.6808ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:25.332822    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:25.332822    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.333047    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.333047    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.339665    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:25.340354    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:25.340354    7764 pod_ready.go:81] duration metric: took 349.4557ms for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.340354    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.535873    7764 request.go:629] Waited for 195.3415ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.536188    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.536188    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.536188    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.536188    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.540667    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:25.740878    7764 request.go:629] Waited for 198.6579ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:25.741071    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:25.741071    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.741071    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.741071    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.755930    7764 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 04:28:25.929219    7764 request.go:629] Waited for 78.8704ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.929219    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.929219    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.929219    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.929495    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.945459    7764 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 04:28:26.134444    7764 request.go:629] Waited for 187.8089ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.134510    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.134571    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.134571    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.134645    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.140071    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.353705    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:26.353705    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.353705    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.353705    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.359521    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.526359    7764 request.go:629] Waited for 165.4633ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.526448    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.526448    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.526543    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.526543    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.532021    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.532527    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:26.532527    7764 pod_ready.go:81] duration metric: took 1.1921684s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.532527    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.732704    7764 request.go:629] Waited for 200.1103ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:28:26.732812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:28:26.732812    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.732925    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.732925    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.738833    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.936664    7764 request.go:629] Waited for 197.0492ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:26.936664    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:26.936664    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.936664    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.936664    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.941663    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:26.942676    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:26.942676    7764 pod_ready.go:81] duration metric: took 410.1475ms for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.942676    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.127937    7764 request.go:629] Waited for 184.7747ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:28:27.128099    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:28:27.128099    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.128099    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.128099    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.132699    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:27.331014    7764 request.go:629] Waited for 196.3222ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.331322    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.331322    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.331410    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.331425    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.336819    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:27.337011    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:27.337011    7764 pod_ready.go:81] duration metric: took 394.3333ms for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.337011    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.535919    7764 request.go:629] Waited for 198.3729ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:28:27.536120    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:28:27.536195    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.536195    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.536195    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.541904    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:27.739023    7764 request.go:629] Waited for 195.8991ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.739136    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.739136    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.739136    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.739355    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.746854    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:27.748781    7764 pod_ready.go:92] pod "kube-proxy-87bnm" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:27.748849    7764 pod_ready.go:81] duration metric: took 411.837ms for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.748849    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.927519    7764 request.go:629] Waited for 178.5983ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:28:27.927980    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:28:27.927980    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.928033    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.928033    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.933267    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.132952    7764 request.go:629] Waited for 198.3849ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.133121    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.133121    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.133121    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.133121    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.137793    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:28.140164    7764 pod_ready.go:92] pod "kube-proxy-jktx8" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.140164    7764 pod_ready.go:81] duration metric: took 391.3129ms for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.140249    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.335196    7764 request.go:629] Waited for 194.6957ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:28:28.335344    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:28:28.335344    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.335459    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.335459    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.340533    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.536425    7764 request.go:629] Waited for 193.8862ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.536783    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.536783    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.536783    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.536783    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.541533    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:28.543580    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.543580    7764 pod_ready.go:81] duration metric: took 403.3296ms for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.543580    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.738335    7764 request.go:629] Waited for 194.4595ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:28:28.738431    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:28:28.738431    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.738506    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.738506    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.748022    7764 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 04:28:28.941842    7764 request.go:629] Waited for 193.3654ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:28.941928    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:28.941928    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.941928    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.941928    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.947294    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.948720    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.948834    7764 pod_ready.go:81] duration metric: took 405.2524ms for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.948834    7764 pod_ready.go:38] duration metric: took 4.0166767s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
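The per-pod waits above follow the same pattern: fetch the pod, check its PodReady condition, then re-check the hosting node. Below is a small sketch of the pod-side check, assuming a *kubernetes.Clientset built as in the node-readiness sketch earlier (the package name is hypothetical; this is not minikube's pod_ready.go).

// pod_ready_sketch.go: report whether a kube-system pod is Ready (illustrative only).
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podReady(cs *kubernetes.Clientset, name string) (bool, error) {
	// Equivalent of the logged request: GET /api/v1/namespaces/kube-system/pods/<name>
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}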
	I0624 04:28:28.948834    7764 api_server.go:52] waiting for apiserver process to appear ...
	I0624 04:28:28.962227    7764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:28:28.990048    7764 api_server.go:72] duration metric: took 17.5108258s to wait for apiserver process to appear ...
	I0624 04:28:28.990169    7764 api_server.go:88] waiting for apiserver healthz status ...
	I0624 04:28:28.990250    7764 api_server.go:253] Checking apiserver healthz at https://172.31.219.170:8443/healthz ...
	I0624 04:28:29.001041    7764 api_server.go:279] https://172.31.219.170:8443/healthz returned 200:
	ok
	I0624 04:28:29.001264    7764 round_trippers.go:463] GET https://172.31.219.170:8443/version
	I0624 04:28:29.001304    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.001347    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.001347    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.001974    7764 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 04:28:29.003171    7764 api_server.go:141] control plane version: v1.30.2
	I0624 04:28:29.003281    7764 api_server.go:131] duration metric: took 13.0017ms to wait for apiserver health ...
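The healthz step above is a plain GET against /healthz on the apiserver that is expected to return the literal body "ok". Below is a sketch of the same probe through client-go's REST client, reusing a clientset built as in the first sketch (illustrative only, not minikube's api_server.go).

// healthz_sketch.go: probe the apiserver /healthz endpoint (illustrative only).
package readiness

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func apiserverHealthy(cs *kubernetes.Clientset) error {
	// Equivalent of the logged check: GET https://<apiserver>:8443/healthz -> "ok"
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", string(body))
	}
	return nil
}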
	I0624 04:28:29.003281    7764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 04:28:29.130016    7764 request.go:629] Waited for 126.6316ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.130380    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.130380    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.130485    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.130485    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.137976    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:29.145789    7764 system_pods.go:59] 17 kube-system pods found
	I0624 04:28:29.145789    7764 system_pods.go:61] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:28:29.145789    7764 system_pods.go:74] duration metric: took 142.5067ms to wait for pod list to return data ...
	I0624 04:28:29.145789    7764 default_sa.go:34] waiting for default service account to be created ...
	I0624 04:28:29.332967    7764 request.go:629] Waited for 187.1779ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:28:29.333090    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:28:29.333090    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.333090    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.333090    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.338765    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:29.339489    7764 default_sa.go:45] found service account: "default"
	I0624 04:28:29.339489    7764 default_sa.go:55] duration metric: took 193.6992ms for default service account to be created ...
	I0624 04:28:29.339489    7764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 04:28:29.535869    7764 request.go:629] Waited for 196.3794ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.536019    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.536019    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.536019    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.536094    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.547818    7764 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0624 04:28:29.560320    7764 system_pods.go:86] 17 kube-system pods found
	I0624 04:28:29.560320    7764 system_pods.go:89] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:28:29.560320    7764 system_pods.go:89] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:28:29.560320    7764 system_pods.go:89] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:28:29.560944    7764 system_pods.go:89] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:28:29.560987    7764 system_pods.go:89] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:28:29.560987    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:28:29.561901    7764 system_pods.go:89] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:28:29.561901    7764 system_pods.go:126] duration metric: took 222.4119ms to wait for k8s-apps to be running ...
	I0624 04:28:29.562055    7764 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 04:28:29.574442    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:28:29.606095    7764 system_svc.go:56] duration metric: took 44.0393ms WaitForService to wait for kubelet
	I0624 04:28:29.606198    7764 kubeadm.go:576] duration metric: took 18.1269732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:28:29.606270    7764 node_conditions.go:102] verifying NodePressure condition ...
	I0624 04:28:29.740727    7764 request.go:629] Waited for 134.2019ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes
	I0624 04:28:29.740855    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes
	I0624 04:28:29.740855    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.740855    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.740855    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.745626    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:29.747200    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:28:29.747200    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:28:29.747200    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:28:29.747200    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:28:29.747200    7764 node_conditions.go:105] duration metric: took 140.9292ms to run NodePressure ...
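The NodePressure verification above reads each node's capacity (2 CPUs and 17734596Ki of ephemeral storage per node in this run). Below is a sketch of listing those figures with client-go, again reusing a clientset built as in the first sketch (illustrative only, not minikube's node_conditions.go).

// node_capacity_sketch.go: print per-node CPU and ephemeral-storage capacity (illustrative only).
package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(cs *kubernetes.Clientset) error {
	// Equivalent of the logged request: GET /api/v1/nodes
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}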
	I0624 04:28:29.747200    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:28:29.747200    7764 start.go:254] writing updated cluster config ...
	I0624 04:28:29.751934    7764 out.go:177] 
	I0624 04:28:29.766046    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:28:29.766832    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:28:29.774794    7764 out.go:177] * Starting "ha-340000-m03" control-plane node in "ha-340000" cluster
	I0624 04:28:29.777301    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:28:29.777301    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:28:29.777301    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:28:29.777301    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:28:29.777301    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:28:29.781158    7764 start.go:360] acquireMachinesLock for ha-340000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:28:29.781158    7764 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-340000-m03"
	I0624 04:28:29.781158    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:28:29.781158    7764 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0624 04:28:29.784224    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:28:29.785202    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:28:29.785202    7764 client.go:168] LocalClient.Create starting
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:28:31.780286    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:28:31.780368    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:31.780402    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:28:33.569119    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:28:33.569119    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:33.569266    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:28:35.138412    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:28:35.138412    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:35.139253    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:28:38.939051    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:28:38.939109    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:38.941295    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:28:39.408348    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:28:39.639687    7764 main.go:141] libmachine: Creating VM...
	I0624 04:28:39.639687    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:28:42.623268    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:28:42.623268    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:42.623268    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:28:42.623422    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:28:44.395580    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:28:44.395580    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:44.395696    7764 main.go:141] libmachine: Creating VHD
	I0624 04:28:44.395696    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:28:48.321070    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B83AC5C6-1D67-49AC-95A5-608E946249BA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:28:48.321070    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:48.321070    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:28:48.321070    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:28:48.333118    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:28:51.596942    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:28:51.596942    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:51.597759    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd' -SizeBytes 20000MB
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-340000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000-m03 -DynamicMemoryEnabled $false
	I0624 04:29:00.368389    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:00.368389    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:00.368481    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000-m03 -Count 2
	I0624 04:29:02.606386    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:02.607068    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:02.607068    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\boot2docker.iso'
	I0624 04:29:05.261886    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:05.261954    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:05.262027    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd'
	I0624 04:29:08.070160    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:08.070350    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:08.070350    7764 main.go:141] libmachine: Starting VM...
	I0624 04:29:08.070443    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000-m03
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [stderr =====>] : 
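For reference, the VM-creation sequence logged above amounts to nine Hyper-V cmdlets run through powershell.exe. Below is a minimal editorial sketch (not minikube source) that replays that sequence with os/exec; the VM name, paths, and sizes are the ones shown in the log, and error handling beyond a panic is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// ps runs one PowerShell command the same way the driver does above.
func ps(cmd string) error {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", cmd,
	).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	name := "ha-340000-m03"
	dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03`
	steps := []string{
		// A tiny fixed VHD is created first (it carries the SSH key tar header),
		// then converted to a dynamic disk and grown to the requested 20000MB.
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
		// The VM itself: fixed memory, 2 vCPUs, boot2docker ISO in the DVD drive,
		// the dynamic disk attached, then started.
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
	for _, s := range steps {
		if err := ps(s); err != nil {
			panic(err)
		}
	}
}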
	I0624 04:29:11.195279    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:13.598113    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:13.598113    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:13.598340    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:16.203468    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:16.203544    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:17.209068    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:19.524986    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:19.524986    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:19.525195    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:22.187835    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:22.188843    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:23.198023    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:25.481002    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:25.481290    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:25.481290    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:28.104506    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:28.104506    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:29.106735    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:31.394356    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:31.394356    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:31.394743    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:33.998177    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:33.998177    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:35.007839    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:37.326080    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:37.326122    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:37.326219    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:40.003971    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:40.003971    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:40.004066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [stderr =====>] : 
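The "Waiting for host to start..." block above is a simple poll: query the VM state and the first NIC's first IP address until Hyper-V reports both. A minimal sketch of that loop follows; the one-second retry interval is illustrative, not minikube's exact backoff.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one PowerShell command and returns its trimmed stdout.
func ps(cmd string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", cmd,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "ha-340000-m03"
	for {
		state, _ := ps(`( Hyper-V\Get-VM ` + name + ` ).state`)
		ip, _ := ps(`(( Hyper-V\Get-VM ` + name + ` ).networkadapters[0]).ipaddresses[0]`)
		if state == "Running" && ip != "" {
			fmt.Println("host is up at", ip) // 172.31.215.46 in the run above
			return
		}
		time.Sleep(time.Second)
	}
}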
	I0624 04:29:42.217863    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:44.517721    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:44.518228    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:44.518527    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:47.201560    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:47.201560    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:47.207577    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:47.218297    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:47.218297    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:29:47.360483    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:29:47.360551    7764 buildroot.go:166] provisioning hostname "ha-340000-m03"
	I0624 04:29:47.360618    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:52.193887    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:52.193887    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:52.201303    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:52.201303    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:52.201927    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000-m03 && echo "ha-340000-m03" | sudo tee /etc/hostname
	I0624 04:29:52.364604    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000-m03
	
	I0624 04:29:52.365299    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:57.182693    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:57.182693    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:57.189128    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:57.189128    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:57.189653    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:29:57.332038    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:29:57.332182    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:29:57.332182    7764 buildroot.go:174] setting up certificates
	I0624 04:29:57.332182    7764 provision.go:84] configureAuth start
	I0624 04:29:57.332331    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:59.502324    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:59.503315    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:59.503378    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:02.186897    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:02.186897    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:02.187015    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:07.011118    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:07.011118    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:07.011118    7764 provision.go:143] copyHostCerts
	I0624 04:30:07.011118    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:30:07.011654    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:30:07.011654    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:30:07.011920    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:30:07.013185    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:30:07.013934    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:30:07.013934    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:30:07.014668    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:30:07.015554    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:30:07.016090    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:30:07.016090    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:30:07.016393    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:30:07.017875    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000-m03 san=[127.0.0.1 172.31.215.46 ha-340000-m03 localhost minikube]
	I0624 04:30:07.220794    7764 provision.go:177] copyRemoteCerts
	I0624 04:30:07.235765    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:30:07.235765    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:09.413047    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:09.413047    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:09.413954    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:12.074745    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:12.075228    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:12.075332    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:12.182661    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9468776s)
	I0624 04:30:12.182661    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:30:12.183193    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:30:12.231784    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:30:12.231784    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:30:12.280832    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:30:12.281548    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0624 04:30:12.334118    7764 provision.go:87] duration metric: took 15.001877s to configureAuth
	I0624 04:30:12.334118    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:30:12.334804    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:30:12.334915    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:14.531074    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:14.532040    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:14.532156    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:17.153618    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:17.154602    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:17.160547    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:17.161272    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:17.161272    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:30:17.301973    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:30:17.301973    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:30:17.301973    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:30:17.301973    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:22.189547    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:22.190217    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:22.196085    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:22.196931    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:22.196931    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.219.170"
	Environment="NO_PROXY=172.31.219.170,172.31.216.99"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:30:22.355754    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.219.170
	Environment=NO_PROXY=172.31.219.170,172.31.216.99
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:30:22.355852    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:24.538476    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:24.538476    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:24.538564    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:27.211904    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:27.211904    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:27.219219    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:27.219430    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:27.219430    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:30:29.424476    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 04:30:29.424476    7764 machine.go:97] duration metric: took 47.206429s to provisionDockerMachine
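The SSH one-liner at 04:30:27 installs the rendered unit only when it differs from what is already on disk, then reloads systemd and enables/restarts docker. A local-filesystem editorial sketch of the same install-if-changed pattern (paths as in the log; minikube performs this remotely in a single shell command):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	current := "/lib/systemd/system/docker.service"
	candidate := current + ".new"

	oldUnit, _ := os.ReadFile(current) // missing on first boot, hence the "can't stat" output above
	newUnit, err := os.ReadFile(candidate)
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldUnit, newUnit) {
		return // unit unchanged: nothing to reload or restart
	}
	if err := os.Rename(candidate, current); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}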
	I0624 04:30:29.424476    7764 client.go:171] duration metric: took 1m59.6388071s to LocalClient.Create
	I0624 04:30:29.424476    7764 start.go:167] duration metric: took 1m59.6388071s to libmachine.API.Create "ha-340000"
	I0624 04:30:29.424476    7764 start.go:293] postStartSetup for "ha-340000-m03" (driver="hyperv")
	I0624 04:30:29.424476    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:30:29.436940    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:30:29.436940    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:31.668008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:31.668008    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:31.668381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:34.298028    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:34.298028    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:34.298967    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:34.413175    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9762153s)
	I0624 04:30:34.426626    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:30:34.433763    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:30:34.433763    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:30:34.434369    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:30:34.435408    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:30:34.435408    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:30:34.447296    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:30:34.473393    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:30:34.531482    7764 start.go:296] duration metric: took 5.106986s for postStartSetup
	I0624 04:30:34.534738    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:36.714018    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:36.714235    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:36.714235    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:39.323294    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:39.323294    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:39.324343    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:30:39.326579    7764 start.go:128] duration metric: took 2m9.5449154s to createHost
	I0624 04:30:39.326579    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:41.514851    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:41.515375    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:41.515375    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:44.135098    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:44.135098    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:44.141184    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:44.141663    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:44.141731    7764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 04:30:44.276871    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228644.278623547
	
	I0624 04:30:44.276984    7764 fix.go:216] guest clock: 1719228644.278623547
	I0624 04:30:44.276984    7764 fix.go:229] Guest: 2024-06-24 04:30:44.278623547 -0700 PDT Remote: 2024-06-24 04:30:39.3265792 -0700 PDT m=+574.988835701 (delta=4.952044347s)
	I0624 04:30:44.277077    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:46.464541    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:46.464907    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:46.465156    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:49.078161    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:49.078926    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:49.085962    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:49.085962    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:49.085962    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228644
	I0624 04:30:49.236292    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:30:44 UTC 2024
	
	I0624 04:30:49.236352    7764 fix.go:236] clock set: Mon Jun 24 11:30:44 UTC 2024
	 (err=<nil>)
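The fix.go lines above read the guest clock with `date +%s.%N`, compute the drift against the local clock, and then reset the guest clock with `sudo date -s`. A sketch of that step, assuming the intent is to realign the guest with the host; runSSH is a hypothetical stand-in for minikube's ssh_runner, stubbed here with the value captured in the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// runSSH is hypothetical: run cmd on the guest over SSH and return stdout.
func runSSH(cmd string) string { return "1719228644.278623547" }

func main() {
	out := strings.TrimSpace(runSSH("date +%s.%N"))
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(int64(secs), int64((secs-float64(int64(secs)))*1e9))
	delta := guest.Sub(time.Now())
	fmt.Printf("guest clock: %v (delta=%v)\n", guest, delta)
	if delta > time.Second || delta < -time.Second {
		// Push the current host time to the guest, as the log's `sudo date -s` does.
		runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
}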
	I0624 04:30:49.236352    7764 start.go:83] releasing machines lock for "ha-340000-m03", held for 2m19.4546498s
	I0624 04:30:49.236603    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:51.396838    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:51.397038    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:51.397327    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:54.042997    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:54.042997    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:54.045907    7764 out.go:177] * Found network options:
	I0624 04:30:54.048782    7764 out.go:177]   - NO_PROXY=172.31.219.170,172.31.216.99
	W0624 04:30:54.050916    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.050916    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:30:54.053276    7764 out.go:177]   - NO_PROXY=172.31.219.170,172.31.216.99
	W0624 04:30:54.054917    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.054917    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.056874    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.056874    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:30:54.058948    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:30:54.058948    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:54.070138    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 04:30:54.070138    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:56.323684    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:56.323684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:59.018407    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:59.018496    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:59.018496    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:59.043926    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:59.043926    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:59.044183    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:59.123817    7764 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0536586s)
	W0624 04:30:59.123913    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:30:59.137150    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:30:59.198368    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 04:30:59.198368    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1394001s)
	I0624 04:30:59.198368    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:30:59.199091    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:30:59.249441    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:30:59.279412    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:30:59.298354    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:30:59.311075    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:30:59.345601    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:30:59.380032    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:30:59.416575    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:30:59.448533    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:30:59.482706    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:30:59.516475    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:30:59.548066    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:30:59.582199    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:30:59.612398    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:30:59.641567    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:30:59.839919    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:30:59.874996    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:30:59.888286    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:30:59.932500    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:30:59.967424    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:31:00.020664    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:31:00.062903    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:31:00.103099    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:31:00.164946    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:31:00.190709    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:31:00.240234    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:31:00.257983    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:31:00.276180    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:31:00.322485    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:31:00.542359    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:31:00.734150    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:31:00.734370    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:31:00.779653    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:00.981710    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:31:03.515854    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5341346s)
	I0624 04:31:03.527767    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:31:03.563828    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:31:03.599962    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:31:03.796479    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:31:04.004600    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:04.212703    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:31:04.257264    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:31:04.297196    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:04.515786    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
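Between 04:30:59 and 04:31:04 the runtime is switched to docker + cri-dockerd: containerd and crio are stopped, crictl is pointed at /var/run/cri-dockerd.sock via /etc/crictl.yaml, and the docker / cri-docker units are unmasked, enabled, and restarted. A sketch of that systemctl sequence (runSSH is a hypothetical helper; the commands are the ones shown in the log):

package main

import "fmt"

// runSSH is hypothetical; in minikube each command goes through ssh_runner.Run.
func runSSH(cmd string) {
	fmt.Println("ssh:", cmd)
}

func main() {
	for _, cmd := range []string{
		"sudo systemctl stop -f containerd",
		"sudo systemctl stop -f crio",
		"sudo systemctl unmask docker.service",
		"sudo systemctl enable docker.socket",
		"sudo systemctl restart docker",
		"sudo systemctl unmask cri-docker.socket",
		"sudo systemctl enable cri-docker.socket",
		"sudo systemctl daemon-reload",
		"sudo systemctl restart cri-docker.service",
	} {
		runSSH(cmd)
	}
}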
	I0624 04:31:04.623019    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:31:04.637062    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:31:04.645350    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:31:04.662781    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:31:04.680982    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:31:04.736333    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 04:31:04.747544    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:31:04.792538    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:31:04.827774    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:31:04.830135    7764 out.go:177]   - env NO_PROXY=172.31.219.170
	I0624 04:31:04.832276    7764 out.go:177]   - env NO_PROXY=172.31.219.170,172.31.216.99
	I0624 04:31:04.835401    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:31:04.841159    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:31:04.841159    7764 ip.go:210] interface addr: 172.31.208.1/20
	I0624 04:31:04.854165    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:31:04.861182    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
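The two commands above first grep /etc/hosts for a host.minikube.internal entry and, when it is missing, rewrite the file by filtering out any stale entry and appending the new tab-separated mapping. A small illustrative Go helper that builds the same kind of one-liner (not minikube's own code):

    package main

    import "fmt"

    // hostsUpdateCmd builds a shell one-liner in the same spirit as the logged step:
    // drop any existing line ending in "<tab>name", append "ip<tab>name", and copy
    // the temp file back over /etc/hosts with sudo.
    func hostsUpdateCmd(ip, name string) string {
        entry := ip + "\t" + name // tab-separated hosts entry, as in the logged echo
        return fmt.Sprintf(
            `{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
            name, entry)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("172.31.208.1", "host.minikube.internal"))
    }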
	I0624 04:31:04.883034    7764 mustload.go:65] Loading cluster: ha-340000
	I0624 04:31:04.883693    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:31:04.883975    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:07.051544    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:07.051544    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:07.051544    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:31:07.052940    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.215.46
	I0624 04:31:07.053001    7764 certs.go:194] generating shared ca certs ...
	I0624 04:31:07.053057    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.053632    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:31:07.053952    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:31:07.054200    7764 certs.go:256] generating profile certs ...
	I0624 04:31:07.054451    7764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:31:07.054451    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28
	I0624 04:31:07.055012    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.216.99 172.31.215.46 172.31.223.254]
	I0624 04:31:07.218618    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 ...
	I0624 04:31:07.218618    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28: {Name:mk7c1cfb6b5dddd8b7b8e040cea23942dd2d96aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.220588    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28 ...
	I0624 04:31:07.220588    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28: {Name:mk345b96410dd305797032f83b6a7a4525eab593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.221577    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:31:07.233081    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:31:07.234035    7764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
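The certificate step above issues a fresh apiserver serving certificate for the enlarged control plane, signed by the shared minikubeCA and carrying the logged IP SANs (the service ClusterIP, loopback, the three control-plane node IPs, and the HA VIP 172.31.223.254). The sketch below shows how such a certificate can be issued with Go's crypto/x509; it is not minikube's implementation (that lives in crypto.go as the log shows), and the ca.crt/ca.key paths, PKCS#1 RSA key format, subject, and validity period are assumptions for illustration only.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the signing CA. Paths and key format ("RSA PRIVATE KEY", PKCS#1)
        // are assumptions for this sketch.
        caCertPEM, err := os.ReadFile("ca.crt")
        must(err)
        caKeyPEM, err := os.ReadFile("ca.key")
        must(err)
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        must(err)

        // IP SANs copied from the logged apiserver cert.
        var sans []net.IP
        for _, s := range []string{
            "10.96.0.1", "127.0.0.1", "10.0.0.1",
            "172.31.219.170", "172.31.216.99", "172.31.215.46", "172.31.223.254",
        } {
            sans = append(sans, net.ParseIP(s))
        }

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"}, // subject is illustrative
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is illustrative
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  sans,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        must(err)
        must(os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
        must(os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
    }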
	I0624 04:31:07.234035    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:31:07.234035    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:31:07.234927    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:31:07.235144    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:31:07.235958    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:31:07.236186    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:31:07.236765    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:31:07.237694    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:31:07.237694    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:31:07.238419    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:31:07.238548    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:07.238548    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:31:07.238548    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:09.443461    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:09.443461    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:09.443721    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:31:12.107653    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:31:12.107653    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:12.107902    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:31:12.224109    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0624 04:31:12.232391    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0624 04:31:12.270365    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0624 04:31:12.279016    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0624 04:31:12.310163    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0624 04:31:12.317495    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0624 04:31:12.353252    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0624 04:31:12.360469    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0624 04:31:12.407372    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0624 04:31:12.414108    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0624 04:31:12.448983    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0624 04:31:12.456131    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0624 04:31:12.476513    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:31:12.525761    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:31:12.575029    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:31:12.621707    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:31:12.676117    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0624 04:31:12.732880    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:31:12.786222    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:31:12.836121    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:31:12.890184    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:31:12.939793    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:31:12.990002    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:31:13.037923    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0624 04:31:13.072699    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0624 04:31:13.108149    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0624 04:31:13.145718    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0624 04:31:13.179869    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0624 04:31:13.213206    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0624 04:31:13.246195    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0624 04:31:13.293043    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:31:13.317584    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:31:13.350901    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.358812    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.371262    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.393846    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:31:13.431766    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:31:13.466814    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.474411    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.488241    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.510172    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 04:31:13.543032    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:31:13.575139    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:31:13.582643    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:31:13.594772    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:31:13.618725    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 04:31:13.652125    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:31:13.660020    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:31:13.660020    7764 kubeadm.go:928] updating node {m03 172.31.215.46 8443 v1.30.2 docker true true} ...
	I0624 04:31:13.660020    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 04:31:13.660596    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:31:13.673539    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:31:13.699732    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:31:13.699732    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
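The generated manifest runs kube-vip as a static pod on the control-plane node; with cp_enable and lb_enable set, the kube-vip instances elect a leader via the plndr-cp-lock lease and answer for the virtual address 172.31.223.254, so the cluster stays reachable at https://172.31.223.254:8443 no matter which control-plane node currently holds the VIP. The following is a trivial connectivity probe of those endpoints, purely a diagnostic sketch using addresses from the log, not part of minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Addresses taken from the log: the HA VIP and the three control-plane node IPs.
        endpoints := []string{
            "172.31.223.254:8443", // kube-vip virtual address (APIServerHAVIP)
            "172.31.219.170:8443", // ha-340000
            "172.31.216.99:8443",  // ha-340000-m02
            "172.31.215.46:8443",  // ha-340000-m03
        }
        for _, ep := range endpoints {
            conn, err := net.DialTimeout("tcp", ep, 5*time.Second)
            if err != nil {
                fmt.Printf("%-22s unreachable: %v\n", ep, err)
                continue
            }
            conn.Close()
            fmt.Printf("%-22s accepting connections\n", ep)
        }
    }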
	I0624 04:31:13.712403    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:31:13.731001    7764 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0624 04:31:13.745744    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0624 04:31:13.763746    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:31:13.763746    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:31:13.778724    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:31:13.778724    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:31:13.780731    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:31:13.786132    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 04:31:13.786132    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0624 04:31:13.826217    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 04:31:13.826623    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0624 04:31:13.826307    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:31:13.842204    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:31:13.884702    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 04:31:13.885229    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
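The three transfers above push kubeadm, kubectl, and kubelet from the host cache into /var/lib/minikube/binaries/v1.30.2 once the stat checks show they are missing; the earlier "Not caching binary" lines show the dl.k8s.io URLs and their .sha256 sidecars used when the cache itself has to be populated. A minimal sketch of fetching one of those binaries and checking it against the published SHA-256 digest, assuming direct HTTPS access to dl.k8s.io:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm"

        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256") // sidecar file holds the expected hex digest
        if err != nil {
            panic(err)
        }

        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // first field is the digest
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch for kubeadm")
        }
        if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubeadm verified:", want)
    }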
	I0624 04:31:15.237705    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0624 04:31:15.255599    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0624 04:31:15.301892    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:31:15.337163    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0624 04:31:15.387395    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:31:15.394615    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:31:15.433768    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:15.648263    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:31:15.681406    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:31:15.682456    7764 start.go:316] joinCluster: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:31:15.682677    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0624 04:31:15.682784    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:17.926278    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:17.926422    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:17.926422    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:31:20.585612    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:31:20.585612    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:20.586011    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:31:20.819671    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1368265s)
	I0624 04:31:20.819773    7764 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:31:20.819773    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7l95v3.4djr7oozbpugwz2j --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m03 --control-plane --apiserver-advertise-address=172.31.215.46 --apiserver-bind-port=8443"
	I0624 04:32:07.594871    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7l95v3.4djr7oozbpugwz2j --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m03 --control-plane --apiserver-advertise-address=172.31.215.46 --apiserver-bind-port=8443": (46.7749132s)
	I0624 04:32:07.594951    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0624 04:32:08.455240    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000-m03 minikube.k8s.io/updated_at=2024_06_24T04_32_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=false
	I0624 04:32:08.660551    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-340000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0624 04:32:08.820207    7764 start.go:318] duration metric: took 53.1375751s to joinCluster
	I0624 04:32:08.820207    7764 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:32:08.821271    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:32:08.823330    7764 out.go:177] * Verifying Kubernetes components...
	I0624 04:32:08.839918    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:32:09.250611    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:32:09.295045    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:32:09.295461    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0624 04:32:09.295461    7764 kubeadm.go:477] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.219.170:8443
	I0624 04:32:09.296693    7764 node_ready.go:35] waiting up to 6m0s for node "ha-340000-m03" to be "Ready" ...
	I0624 04:32:09.296890    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:09.296890    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:09.296890    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:09.296890    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:09.316809    7764 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0624 04:32:09.807209    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:09.807209    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:09.807325    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:09.807414    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:09.827889    7764 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0624 04:32:10.310982    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:10.310982    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:10.310982    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:10.311277    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:10.315501    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:10.811029    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:10.811029    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:10.811029    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:10.811029    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:10.816499    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:11.303784    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:11.303784    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:11.303784    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:11.303784    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:11.307440    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:11.308932    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:11.802698    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:11.802698    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:11.802698    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:11.802698    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:11.809189    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:32:12.311599    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:12.311599    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:12.311777    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:12.311777    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:12.316594    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:12.803988    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:12.803988    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:12.804304    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:12.804304    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:12.808853    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:13.309582    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:13.309582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:13.309582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:13.309582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:13.362409    7764 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0624 04:32:13.363366    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:13.811415    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:13.811415    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:13.811415    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:13.811632    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:13.818911    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:14.299909    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:14.299974    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:14.299974    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:14.300035    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:14.305244    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:14.798558    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:14.798732    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:14.798732    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:14.798732    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:14.809510    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:15.301559    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:15.301559    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:15.301559    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:15.301559    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:15.304177    7764 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 04:32:15.801642    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:15.801642    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:15.801741    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:15.801741    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:15.807700    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:15.808568    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:16.301725    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:16.301793    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:16.301793    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:16.301793    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:16.306521    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:16.806576    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:16.806576    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:16.806576    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:16.806576    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:16.812265    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:17.307565    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:17.307911    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:17.307911    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:17.307911    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:17.311249    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:17.803195    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:17.803236    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:17.803236    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:17.803236    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:17.829471    7764 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0624 04:32:17.830201    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:18.302566    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:18.302660    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:18.302660    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:18.302704    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:18.307633    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:18.806367    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:18.806719    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:18.806719    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:18.806719    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:18.811372    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.306932    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:19.306932    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.306932    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.306932    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.312601    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:19.811196    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:19.811196    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.811196    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.811196    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.824720    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:32:19.826109    7764 node_ready.go:49] node "ha-340000-m03" has status "Ready":"True"
	I0624 04:32:19.826253    7764 node_ready.go:38] duration metric: took 10.5295184s for node "ha-340000-m03" to be "Ready" ...
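The polling above, and the pod checks that follow, repeatedly GET the new node and then the system-critical pods until their Ready conditions report True, at roughly half-second intervals; the "client-side throttling" messages come from client-go's default client-side rate limiter, not from API priority and fairness. Below is a compact sketch of the same wait written directly against client-go rather than minikube's own round-tripper logging; the kubeconfig path and the node/pod names are taken from the log, the polling cadence and 6-minute timeout mirror the logged values.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // Wait up to 6 minutes for the new node to report Ready, polling every 500ms.
        err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-340000-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node ha-340000-m03 is Ready")

        // Then confirm a system-critical pod, e.g. the third etcd member, is Ready.
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-340000-m03", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Println("etcd-ha-340000-m03 Ready:", c.Status)
            }
        }
    }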
	I0624 04:32:19.826253    7764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:32:19.826393    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:19.826465    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.826465    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.826513    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.840491    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:32:19.852526    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.852526    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6xxtk
	I0624 04:32:19.852526    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.852526    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.853077    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.856980    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.858402    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.858402    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.858402    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.858402    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.862234    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.863468    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.863598    7764 pod_ready.go:81] duration metric: took 11.0726ms for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.863598    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.863721    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6zh6m
	I0624 04:32:19.863721    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.863721    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.863721    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.868185    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.869945    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.869945    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.869945    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.869945    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.873539    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.874136    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.874136    7764 pod_ready.go:81] duration metric: took 10.5375ms for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.874136    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.874136    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000
	I0624 04:32:19.874136    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.874136    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.874136    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.878371    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.879517    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.879517    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.879517    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.879636    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.882883    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.883896    7764 pod_ready.go:92] pod "etcd-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.883896    7764 pod_ready.go:81] duration metric: took 9.7602ms for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.883896    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.883896    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m02
	I0624 04:32:19.883896    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.883896    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.883896    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.886306    7764 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 04:32:19.887853    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:19.887912    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.887912    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.887912    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.891833    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.891916    7764 pod_ready.go:92] pod "etcd-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.891916    7764 pod_ready.go:81] duration metric: took 8.0198ms for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.892474    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.013781    7764 request.go:629] Waited for 121.1133ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m03
	I0624 04:32:20.014040    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m03
	I0624 04:32:20.014040    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.014040    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.014040    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.018498    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.215841    7764 request.go:629] Waited for 195.2549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:20.216135    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:20.216185    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.216185    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.216185    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.220391    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.223664    7764 pod_ready.go:92] pod "etcd-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:20.223664    7764 pod_ready.go:81] duration metric: took 331.1891ms for pod "etcd-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.223664    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.422921    7764 request.go:629] Waited for 199.2558ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:32:20.423121    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:32:20.423121    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.423121    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.423121    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.427710    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.624834    7764 request.go:629] Waited for 195.5905ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:20.625055    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:20.625055    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.625055    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.625186    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.630962    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:20.631629    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:20.631692    7764 pod_ready.go:81] duration metric: took 407.9626ms for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.631692    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.813525    7764 request.go:629] Waited for 181.4943ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:32:20.813850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:32:20.813850    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.813850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.813850    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.822295    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:21.019304    7764 request.go:629] Waited for 196.1691ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:21.019582    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:21.019582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.019582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.019582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.024165    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:21.025418    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.025418    7764 pod_ready.go:81] duration metric: took 393.7246ms for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.025418    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.221744    7764 request.go:629] Waited for 196.325ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m03
	I0624 04:32:21.221850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m03
	I0624 04:32:21.221850    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.221850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.222036    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.227692    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:21.424989    7764 request.go:629] Waited for 195.5815ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:21.425129    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:21.425129    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.425129    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.425129    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.432859    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:21.433312    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.433312    7764 pod_ready.go:81] duration metric: took 407.8925ms for pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.433312    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.614461    7764 request.go:629] Waited for 180.962ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:32:21.614461    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:32:21.614461    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.614461    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.614461    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.619998    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:21.816887    7764 request.go:629] Waited for 195.2661ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:21.817439    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:21.817439    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.817439    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.817531    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.821254    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:21.822393    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.822519    7764 pod_ready.go:81] duration metric: took 389.2053ms for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.822519    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.021645    7764 request.go:629] Waited for 199.1257ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:32:22.021826    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:32:22.021933    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.021933    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.021933    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.030013    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:22.225881    7764 request.go:629] Waited for 194.1134ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:22.226204    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:22.226204    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.226204    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.226204    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.230568    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:22.231984    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:22.232054    7764 pod_ready.go:81] duration metric: took 409.5341ms for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.232054    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.414699    7764 request.go:629] Waited for 182.3666ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m03
	I0624 04:32:22.414812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m03
	I0624 04:32:22.414812    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.414812    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.414812    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.423299    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:22.617082    7764 request.go:629] Waited for 192.3023ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:22.617144    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:22.617144    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.617144    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.617144    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.624040    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:32:22.625006    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:22.625006    7764 pod_ready.go:81] duration metric: took 392.95ms for pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.625006    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.821218    7764 request.go:629] Waited for 196.0048ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:32:22.821342    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:32:22.821342    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.821509    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.821509    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.826500    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:23.025313    7764 request.go:629] Waited for 197.089ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:23.025533    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:23.025533    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.025591    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.025591    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.030402    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:23.032250    7764 pod_ready.go:92] pod "kube-proxy-87bnm" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.032333    7764 pod_ready.go:81] duration metric: took 407.3257ms for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.032333    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.215451    7764 request.go:629] Waited for 182.736ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:32:23.215586    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:32:23.215586    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.215586    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.215586    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.220345    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:23.420064    7764 request.go:629] Waited for 198.729ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:23.420201    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:23.420408    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.420408    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.420408    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.427849    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:23.429690    7764 pod_ready.go:92] pod "kube-proxy-jktx8" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.429745    7764 pod_ready.go:81] duration metric: took 397.4104ms for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.429745    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkf7m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.611491    7764 request.go:629] Waited for 181.553ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkf7m
	I0624 04:32:23.611720    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkf7m
	I0624 04:32:23.611720    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.611720    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.611852    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.620711    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:23.826569    7764 request.go:629] Waited for 204.9426ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:23.826799    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:23.826910    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.826910    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.826910    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.832731    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:23.834081    7764 pod_ready.go:92] pod "kube-proxy-xkf7m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.834219    7764 pod_ready.go:81] duration metric: took 404.4722ms for pod "kube-proxy-xkf7m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.834219    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.014787    7764 request.go:629] Waited for 180.3453ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:32:24.014992    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:32:24.015104    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.015104    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.015104    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.019823    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.217322    7764 request.go:629] Waited for 195.5396ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:24.217497    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:24.217582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.217582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.217582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.222070    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.223677    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:24.223677    7764 pod_ready.go:81] duration metric: took 389.4563ms for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.223677    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.420403    7764 request.go:629] Waited for 196.5662ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:32:24.420519    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:32:24.420519    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.420519    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.420519    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.425498    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.623946    7764 request.go:629] Waited for 197.1623ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:24.623946    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:24.623946    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.623946    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.623946    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.632161    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:24.633035    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:24.633035    7764 pod_ready.go:81] duration metric: took 409.2471ms for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.633035    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.826089    7764 request.go:629] Waited for 192.8603ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m03
	I0624 04:32:24.826215    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m03
	I0624 04:32:24.826215    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.826215    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.826307    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.830522    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:25.014446    7764 request.go:629] Waited for 182.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:25.014672    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:25.014723    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.014723    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.014802    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.023559    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:25.024887    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:25.024887    7764 pod_ready.go:81] duration metric: took 391.8496ms for pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:25.024887    7764 pod_ready.go:38] duration metric: took 5.1985782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:32:25.025486    7764 api_server.go:52] waiting for apiserver process to appear ...
	I0624 04:32:25.042426    7764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:32:25.072189    7764 api_server.go:72] duration metric: took 16.2519179s to wait for apiserver process to appear ...
	I0624 04:32:25.072336    7764 api_server.go:88] waiting for apiserver healthz status ...
	I0624 04:32:25.072336    7764 api_server.go:253] Checking apiserver healthz at https://172.31.219.170:8443/healthz ...
	I0624 04:32:25.083072    7764 api_server.go:279] https://172.31.219.170:8443/healthz returned 200:
	ok
	I0624 04:32:25.083830    7764 round_trippers.go:463] GET https://172.31.219.170:8443/version
	I0624 04:32:25.083893    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.083893    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.083944    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.085121    7764 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 04:32:25.085121    7764 api_server.go:141] control plane version: v1.30.2
	I0624 04:32:25.085857    7764 api_server.go:131] duration metric: took 13.5208ms to wait for apiserver health ...
	I0624 04:32:25.085935    7764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 04:32:25.218283    7764 request.go:629] Waited for 132.0554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.218560    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.218560    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.218560    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.218560    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.229110    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:25.239366    7764 system_pods.go:59] 24 kube-system pods found
	I0624 04:32:25.239366    7764 system_pods.go:61] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000-m03" [c5f5b70a-588b-4114-9dd0-e3c4d90979f1] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-8mgnc" [4853ca7d-abd4-4536-b997-660eb300e8bf] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000-m03" [31532987-9531-4a44-9483-5027eee84cdc] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m03" [26530110-2239-496e-889c-aa0bb05a2177] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-xkf7m" [c6f588e9-7459-4d98-a68a-3f0122f834b4] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000-m03" [b82baee9-7ec1-4fb1-91cd-460dacc55291] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:32:25.239953    7764 system_pods.go:61] "kube-vip-ha-340000-m03" [fd2b4f66-bde4-42d8-8c22-dcedac5cadf0] Running
	I0624 04:32:25.239953    7764 system_pods.go:61] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:32:25.239953    7764 system_pods.go:74] duration metric: took 154.0168ms to wait for pod list to return data ...
	I0624 04:32:25.239953    7764 default_sa.go:34] waiting for default service account to be created ...
	I0624 04:32:25.421334    7764 request.go:629] Waited for 181.3463ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:32:25.421334    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:32:25.421334    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.421334    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.421334    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.427249    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:25.427490    7764 default_sa.go:45] found service account: "default"
	I0624 04:32:25.427562    7764 default_sa.go:55] duration metric: took 187.6087ms for default service account to be created ...
	I0624 04:32:25.427617    7764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 04:32:25.612063    7764 request.go:629] Waited for 184.38ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.612234    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.612388    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.612791    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.612791    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.623518    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:25.633399    7764 system_pods.go:86] 24 kube-system pods found
	I0624 04:32:25.633399    7764 system_pods.go:89] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000-m03" [c5f5b70a-588b-4114-9dd0-e3c4d90979f1] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-8mgnc" [4853ca7d-abd4-4536-b997-660eb300e8bf] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000-m03" [31532987-9531-4a44-9483-5027eee84cdc] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m03" [26530110-2239-496e-889c-aa0bb05a2177] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-xkf7m" [c6f588e9-7459-4d98-a68a-3f0122f834b4] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000-m03" [b82baee9-7ec1-4fb1-91cd-460dacc55291] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000-m03" [fd2b4f66-bde4-42d8-8c22-dcedac5cadf0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:32:25.633399    7764 system_pods.go:126] duration metric: took 205.7805ms to wait for k8s-apps to be running ...
	I0624 04:32:25.633399    7764 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 04:32:25.644704    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:32:25.675184    7764 system_svc.go:56] duration metric: took 41.7854ms WaitForService to wait for kubelet
	I0624 04:32:25.675184    7764 kubeadm.go:576] duration metric: took 16.8549102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:32:25.675184    7764 node_conditions.go:102] verifying NodePressure condition ...
	I0624 04:32:25.819734    7764 request.go:629] Waited for 144.4176ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes
	I0624 04:32:25.819813    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes
	I0624 04:32:25.819813    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.819894    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.819954    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.825722    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:105] duration metric: took 152.2153ms to run NodePressure ...
	I0624 04:32:25.827400    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:32:25.827400    7764 start.go:254] writing updated cluster config ...
	I0624 04:32:25.841759    7764 ssh_runner.go:195] Run: rm -f paused
	I0624 04:32:26.004929    7764 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0624 04:32:26.008221    7764 out.go:177] * Done! kubectl is now configured to use "ha-340000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0dab5dcd476f47a30e07c9a16098451d15147ab0d169a4ba10025d366cc49641/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/155582c2f095eaf00f2c023270663657207b1e1d75c73d7bc110ba03729eb826/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674001531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674294132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674717233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.675511336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/833cea563c83c88c2aee77fd8ad46234843a25c0fbc228859bdc9dc7b77572c4/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993201406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993430007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993448707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993749908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.092911226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093324728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093433329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093609129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557299536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557395937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557411837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557674639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:33:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b025a7a92eb76586e6a5922889948f4f0bc62eaae70f359f94dbdcba5eda220c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 24 11:33:07 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:33:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389033270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389399471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389438171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389716372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66537845ba76a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   b025a7a92eb76       busybox-fc5497c4f-mg7l6
	7a761577e341f       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   833cea563c83c       coredns-7db6d8ff4d-6xxtk
	cd348d4e5aabb       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   155582c2f095e       coredns-7db6d8ff4d-6zh6m
	d1ce6ad1d1c36       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   0dab5dcd476f4       storage-provisioner
	907fa20f2449c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   7485bf2f02157       kindnet-k4p7m
	a455e5d79591c       53c535741fb44                                                                                         9 minutes ago        Running             kube-proxy                0                   fb60bddb8bb5f       kube-proxy-jktx8
	846133f35b3bb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   da0ca313de317       kube-vip-ha-340000
	294520b11212a       e874818b3caac                                                                                         10 minutes ago       Running             kube-controller-manager   0                   f22dad9ab27ee       kube-controller-manager-ha-340000
	76c78b3ed83d9       7820c83aa1394                                                                                         10 minutes ago       Running             kube-scheduler            0                   107803efb04ae       kube-scheduler-ha-340000
	3d24fc713d0cd       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   917c33a433524       etcd-ha-340000
	d4dc3f4ed7f8b       56ce0fd9fb532                                                                                         10 minutes ago       Running             kube-apiserver            0                   b74d0615ee4a0       kube-apiserver-ha-340000
	
	
	==> coredns [7a761577e341] <==
	[INFO] 10.244.1.2:56437 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000657s
	[INFO] 10.244.2.2:50732 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148301s
	[INFO] 10.244.2.2:37925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.125148764s
	[INFO] 10.244.2.2:53136 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326101s
	[INFO] 10.244.2.2:47141 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034498028s
	[INFO] 10.244.2.2:49837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121101s
	[INFO] 10.244.0.4:55762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001136s
	[INFO] 10.244.0.4:53102 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001016s
	[INFO] 10.244.0.4:45651 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000976s
	[INFO] 10.244.0.4:34355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128401s
	[INFO] 10.244.1.2:39172 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088s
	[INFO] 10.244.1.2:53752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	[INFO] 10.244.1.2:40644 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001866s
	[INFO] 10.244.2.2:57720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225001s
	[INFO] 10.244.2.2:47121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000685s
	[INFO] 10.244.2.2:33768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000672s
	[INFO] 10.244.0.4:50263 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079s
	[INFO] 10.244.0.4:56311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001967s
	[INFO] 10.244.1.2:46985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276701s
	[INFO] 10.244.1.2:58755 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000695s
	[INFO] 10.244.1.2:59285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000777s
	[INFO] 10.244.2.2:33498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109801s
	[INFO] 10.244.0.4:60901 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001442s
	[INFO] 10.244.0.4:48052 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000230901s
	[INFO] 10.244.1.2:46845 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101201s
	
	
	==> coredns [cd348d4e5aab] <==
	[INFO] 10.244.1.2:54548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000901904s
	[INFO] 10.244.2.2:34605 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001461s
	[INFO] 10.244.2.2:45784 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001423s
	[INFO] 10.244.2.2:47857 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001039s
	[INFO] 10.244.0.4:51969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012272046s
	[INFO] 10.244.0.4:53245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188801s
	[INFO] 10.244.0.4:39298 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023027385s
	[INFO] 10.244.0.4:50860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000684s
	[INFO] 10.244.1.2:35217 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193301s
	[INFO] 10.244.1.2:43043 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068s
	[INFO] 10.244.1.2:56637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076101s
	[INFO] 10.244.1.2:57783 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252001s
	[INFO] 10.244.1.2:41276 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000099501s
	[INFO] 10.244.2.2:52577 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085401s
	[INFO] 10.244.0.4:43320 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001947s
	[INFO] 10.244.0.4:47744 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130501s
	[INFO] 10.244.1.2:41866 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000749s
	[INFO] 10.244.2.2:55690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000300902s
	[INFO] 10.244.2.2:37854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158801s
	[INFO] 10.244.2.2:34018 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001014s
	[INFO] 10.244.0.4:44130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210901s
	[INFO] 10.244.0.4:53619 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131101s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000278202s
	[INFO] 10.244.1.2:40590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001534s
	[INFO] 10.244.1.2:51259 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000844s
	
	
	==> describe nodes <==
	Name:               ha-340000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T04_24_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:24:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:34:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:33:23 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:33:23 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:33:23 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:33:23 +0000   Mon, 24 Jun 2024 11:24:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.219.170
	  Hostname:    ha-340000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dc1d6ea23974cf3bc55999d63a14514
	  System UUID:                fa1eb7b0-0abc-5149-a08c-a27e05d5426a
	  Boot ID:                    a193d5a8-20d3-444f-b9d5-f391ed40c2ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mg7l6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-6xxtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 coredns-7db6d8ff4d-6zh6m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 etcd-ha-340000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m57s
	  kube-system                 kindnet-k4p7m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-ha-340000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-controller-manager-ha-340000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-jktx8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-ha-340000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-vip-ha-340000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m41s              kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-340000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-340000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-340000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m57s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m57s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m57s              kubelet          Node ha-340000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s              kubelet          Node ha-340000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s              kubelet          Node ha-340000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m45s              node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	  Normal  NodeReady                9m30s              kubelet          Node ha-340000 status is now: NodeReady
	  Normal  RegisteredNode           5m44s              node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	  Normal  RegisteredNode           109s               node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	
	
	Name:               ha-340000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T04_28_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:28:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:34:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:33:43 +0000   Mon, 24 Jun 2024 11:28:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:33:43 +0000   Mon, 24 Jun 2024 11:28:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:33:43 +0000   Mon, 24 Jun 2024 11:28:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:33:43 +0000   Mon, 24 Jun 2024 11:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.216.99
	  Hostname:    ha-340000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5aeac87c1e14cd39bb8892cd5382f7b
	  System UUID:                4f29bf5a-5b58-9940-b557-7ea78cd09aaa
	  Boot ID:                    0b07c24a-5e00-4ef1-b25b-a44b3f20cf09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rrqj8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-340000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-rmfdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-340000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-340000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-87bnm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-340000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-vip-ha-340000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m58s                kube-proxy       
	  Normal  RegisteredNode           6m5s                 node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-340000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-340000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-340000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m44s                node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	  Normal  RegisteredNode           109s                 node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	
	
	Name:               ha-340000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T04_32_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:32:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:34:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:33:33 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:33:33 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:33:33 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:33:33 +0000   Mon, 24 Jun 2024 11:32:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.46
	  Hostname:    ha-340000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 73794d89abea4893be9ddfc306311730
	  System UUID:                b9d53c05-73eb-4c4b-9b21-878922a12b5a
	  Boot ID:                    9bebd6fa-8629-4a52-99a5-4216403a6bb4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lsn8j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-340000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m7s
	  kube-system                 kindnet-8mgnc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m11s
	  kube-system                 kube-apiserver-ha-340000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-340000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-xkf7m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-ha-340000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-vip-ha-340000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node ha-340000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node ha-340000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node ha-340000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m10s                  node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	
	
	==> dmesg <==
	[  +1.728474] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.077611] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun24 11:23] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.191839] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.696754] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.101348] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.539188] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.189520] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.229470] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.774971] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.221863] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.195680] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.266531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.086877] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.104097] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.034404] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[Jun24 11:24] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[  +0.103839] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.759776] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.814579] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[ +15.286113] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.307080] kauditd_printk_skb: 29 callbacks suppressed
	[Jun24 11:28] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3d24fc713d0c] <==
	{"level":"warn","ts":"2024-06-24T11:32:01.386619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.366153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-24T11:32:01.390539Z","caller":"traceutil/trace.go:171","msg":"trace[128528394] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1492; }","duration":"105.340782ms","start":"2024-06-24T11:32:01.285186Z","end":"2024-06-24T11:32:01.390527Z","steps":["trace[128528394] 'agreement among raft nodes before linearized reading'  (duration: 101.377153ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T11:32:01.415832Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-24T11:32:01.627755Z","caller":"traceutil/trace.go:171","msg":"trace[1751472782] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"142.892862ms","start":"2024-06-24T11:32:01.484768Z","end":"2024-06-24T11:32:01.627661Z","steps":["trace[1751472782] 'process raft request'  (duration: 50.406274ms)","trace[1751472782] 'compare'  (duration: 92.370487ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-24T11:32:02.402027Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-24T11:32:03.4047Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-24T11:32:03.923903Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"36ce2f3d11da13c5"}
	{"level":"info","ts":"2024-06-24T11:32:03.92645Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"41766fee91dd9d05","remote-peer-id":"36ce2f3d11da13c5"}
	{"level":"info","ts":"2024-06-24T11:32:03.926635Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"41766fee91dd9d05","remote-peer-id":"36ce2f3d11da13c5"}
	{"level":"info","ts":"2024-06-24T11:32:03.983147Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"41766fee91dd9d05","to":"36ce2f3d11da13c5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-24T11:32:03.983267Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"41766fee91dd9d05","remote-peer-id":"36ce2f3d11da13c5"}
	{"level":"warn","ts":"2024-06-24T11:32:04.082207Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.31.215.46:33672","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-24T11:32:04.084537Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"41766fee91dd9d05","to":"36ce2f3d11da13c5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-24T11:32:04.084577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"41766fee91dd9d05","remote-peer-id":"36ce2f3d11da13c5"}
	{"level":"warn","ts":"2024-06-24T11:32:04.403175Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-24T11:32:05.403839Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-24T11:32:06.412666Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"36ce2f3d11da13c5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-24T11:32:07.410677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41766fee91dd9d05 switched to configuration voters=(3008064665017579607 3949145862589518789 4717080730157292805)"}
	{"level":"info","ts":"2024-06-24T11:32:07.410817Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e2502b32b4291fef","local-member-id":"41766fee91dd9d05"}
	{"level":"info","ts":"2024-06-24T11:32:07.411051Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"41766fee91dd9d05","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"36ce2f3d11da13c5"}
	{"level":"warn","ts":"2024-06-24T11:32:08.528237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.48224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-24T11:32:08.528315Z","caller":"traceutil/trace.go:171","msg":"trace[1319023945] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1568; }","duration":"247.645441ms","start":"2024-06-24T11:32:08.280646Z","end":"2024-06-24T11:32:08.528291Z","steps":["trace[1319023945] 'range keys from in-memory index tree'  (duration: 245.931829ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T11:34:07.811407Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1084}
	{"level":"info","ts":"2024-06-24T11:34:07.951306Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1084,"took":"139.119253ms","hash":441307404,"current-db-size-bytes":3686400,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2207744,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-24T11:34:07.951704Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":441307404,"revision":1084,"compact-revision":-1}
	
	
	==> kernel <==
	 11:34:11 up 12 min,  0 users,  load average: 0.32, 0.33, 0.22
	Linux ha-340000 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [907fa20f2449] <==
	I0624 11:33:28.283724       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:33:38.291934       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:33:38.292020       1 main.go:227] handling current node
	I0624 11:33:38.292035       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:33:38.292043       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:33:38.292564       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:33:38.292653       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:33:48.308847       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:33:48.308991       1 main.go:227] handling current node
	I0624 11:33:48.309008       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:33:48.309016       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:33:48.309495       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:33:48.309577       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:33:58.320009       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:33:58.320232       1 main.go:227] handling current node
	I0624 11:33:58.320250       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:33:58.320258       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:33:58.320696       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:33:58.320999       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:34:08.329639       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:34:08.329778       1 main.go:227] handling current node
	I0624 11:34:08.329794       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:34:08.329802       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:34:08.330463       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:34:08.330540       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d4dc3f4ed7f8] <==
	I0624 11:24:12.613288       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0624 11:24:12.630403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.219.170]
	I0624 11:24:12.631496       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 11:24:12.642974       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 11:24:13.211603       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 11:24:14.557259       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 11:24:14.584316       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0624 11:24:14.609616       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 11:24:27.297559       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0624 11:24:27.419436       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0624 11:33:12.629777       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63056: use of closed network connection
	E0624 11:33:14.142329       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63058: use of closed network connection
	E0624 11:33:14.598550       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63060: use of closed network connection
	E0624 11:33:15.202478       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63062: use of closed network connection
	E0624 11:33:15.691173       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63064: use of closed network connection
	E0624 11:33:16.156634       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63066: use of closed network connection
	E0624 11:33:16.599773       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63068: use of closed network connection
	E0624 11:33:17.052178       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63070: use of closed network connection
	E0624 11:33:17.486498       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63072: use of closed network connection
	E0624 11:33:18.262869       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63075: use of closed network connection
	E0624 11:33:28.714962       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63077: use of closed network connection
	E0624 11:33:29.157751       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63080: use of closed network connection
	E0624 11:33:39.608619       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63082: use of closed network connection
	E0624 11:33:40.048803       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63084: use of closed network connection
	E0624 11:33:50.491713       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63086: use of closed network connection
	
	
	==> kube-controller-manager [294520b11212] <==
	I0624 11:28:06.741349       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-340000-m02"
	I0624 11:32:00.807444       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-340000-m03\" does not exist"
	I0624 11:32:00.876968       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-340000-m03" podCIDRs=["10.244.2.0/24"]
	I0624 11:32:01.819540       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-340000-m03"
	I0624 11:33:04.533843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.329543ms"
	I0624 11:33:04.791863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.585324ms"
	I0624 11:33:05.073379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="281.371302ms"
	I0624 11:33:05.254900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.165453ms"
	E0624 11:33:05.255196       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:05.556417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="301.10805ms"
	E0624 11:33:05.556471       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:05.556546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.3µs"
	I0624 11:33:05.566665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.801µs"
	I0624 11:33:06.087277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.7µs"
	I0624 11:33:06.113340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.2µs"
	I0624 11:33:06.134170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36µs"
	I0624 11:33:07.090587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.3µs"
	I0624 11:33:07.154959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.9µs"
	I0624 11:33:07.255605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.901µs"
	I0624 11:33:07.982807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.626422ms"
	E0624 11:33:07.982866       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:07.983166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.201µs"
	I0624 11:33:07.988686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.2µs"
	I0624 11:33:10.171782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.184366ms"
	I0624 11:33:10.172410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="559.202µs"
	
	
	==> kube-proxy [a455e5d79591] <==
	I0624 11:24:30.038976       1 server_linux.go:69] "Using iptables proxy"
	I0624 11:24:30.073333       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.219.170"]
	I0624 11:24:30.226639       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 11:24:30.226783       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 11:24:30.226808       1 server_linux.go:165] "Using iptables Proxier"
	I0624 11:24:30.231323       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 11:24:30.231875       1 server.go:872] "Version info" version="v1.30.2"
	I0624 11:24:30.232064       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 11:24:30.233934       1 config.go:192] "Starting service config controller"
	I0624 11:24:30.234316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 11:24:30.234538       1 config.go:101] "Starting endpoint slice config controller"
	I0624 11:24:30.235010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 11:24:30.236029       1 config.go:319] "Starting node config controller"
	I0624 11:24:30.236427       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 11:24:30.334959       1 shared_informer.go:320] Caches are synced for service config
	I0624 11:24:30.336429       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 11:24:30.336894       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76c78b3ed83d] <==
	W0624 11:24:11.328854       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 11:24:11.328913       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0624 11:24:11.349387       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0624 11:24:11.349520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0624 11:24:11.419840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0624 11:24:11.420754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0624 11:24:11.421144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 11:24:11.421246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 11:24:11.458218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 11:24:11.458286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 11:24:11.556592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0624 11:24:11.556731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0624 11:24:11.556808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0624 11:24:11.556849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0624 11:24:11.571252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0624 11:24:11.571280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0624 11:24:11.590878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0624 11:24:11.591210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0624 11:24:11.794875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0624 11:24:11.796681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 11:24:14.162430       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0624 11:33:04.472945       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lsn8j\": pod busybox-fc5497c4f-lsn8j is already assigned to node \"ha-340000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lsn8j" node="ha-340000-m03"
	E0624 11:33:04.477564       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f271f626-6a96-4a53-8b97-32e461250473(default/busybox-fc5497c4f-lsn8j) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lsn8j"
	E0624 11:33:04.477674       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lsn8j\": pod busybox-fc5497c4f-lsn8j is already assigned to node \"ha-340000-m03\"" pod="default/busybox-fc5497c4f-lsn8j"
	I0624 11:33:04.477714       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lsn8j" node="ha-340000-m03"
	
	
	==> kubelet <==
	Jun 24 11:29:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:29:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:30:14 ha-340000 kubelet[2212]: E0624 11:30:14.711609    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:30:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:30:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:30:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:30:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:31:14 ha-340000 kubelet[2212]: E0624 11:31:14.712250    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:31:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:31:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:31:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:31:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:32:14 ha-340000 kubelet[2212]: E0624 11:32:14.705830    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:32:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:32:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:32:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:32:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:33:04 ha-340000 kubelet[2212]: I0624 11:33:04.526949    2212 topology_manager.go:215] "Topology Admit Handler" podUID="7d08204d-eb05-49a9-ba36-8181b9a4f19a" podNamespace="default" podName="busybox-fc5497c4f-mg7l6"
	Jun 24 11:33:04 ha-340000 kubelet[2212]: I0624 11:33:04.665384    2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmvk\" (UniqueName: \"kubernetes.io/projected/7d08204d-eb05-49a9-ba36-8181b9a4f19a-kube-api-access-kbmvk\") pod \"busybox-fc5497c4f-mg7l6\" (UID: \"7d08204d-eb05-49a9-ba36-8181b9a4f19a\") " pod="default/busybox-fc5497c4f-mg7l6"
	Jun 24 11:33:05 ha-340000 kubelet[2212]: I0624 11:33:05.795614    2212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b025a7a92eb76586e6a5922889948f4f0bc62eaae70f359f94dbdcba5eda220c"
	Jun 24 11:33:14 ha-340000 kubelet[2212]: E0624 11:33:14.707249    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:33:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:33:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:33:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:33:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:34:03.328854    9968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-340000 -n ha-340000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-340000 -n ha-340000: (12.5857677s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-340000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (104.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 node stop m02 -v=7 --alsologtostderr: (35.0275598s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr: exit status 1 (34.4322892s)

                                                
                                                
** stderr ** 
	W0624 04:50:30.009473    8896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0624 04:50:30.022024    8896 out.go:291] Setting OutFile to fd 724 ...
	I0624 04:50:30.023296    8896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:50:30.023296    8896 out.go:304] Setting ErrFile to fd 692...
	I0624 04:50:30.023296    8896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:50:30.044351    8896 out.go:298] Setting JSON to false
	I0624 04:50:30.044351    8896 mustload.go:65] Loading cluster: ha-340000
	I0624 04:50:30.044351    8896 notify.go:220] Checking for updates...
	I0624 04:50:30.045676    8896 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:50:30.045781    8896 status.go:255] checking status of ha-340000 ...
	I0624 04:50:30.047071    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:50:32.278850    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:32.278850    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:32.278850    8896 status.go:330] ha-340000 host status = "Running" (err=<nil>)
	I0624 04:50:32.278850    8896 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:50:32.282491    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:50:34.477872    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:34.488955    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:34.488955    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:50:37.108747    8896 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:50:37.108747    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:37.108747    8896 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:50:37.137079    8896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0624 04:50:37.139628    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:50:39.392378    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:39.404027    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:39.404382    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:50:42.000973    8896 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:50:42.000973    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:42.001691    8896 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:50:42.102930    8896 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9631593s)
	I0624 04:50:42.116625    8896 ssh_runner.go:195] Run: systemctl --version
	I0624 04:50:42.145596    8896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:50:42.174558    8896 kubeconfig.go:125] found "ha-340000" server: "https://172.31.223.254:8443"
	I0624 04:50:42.174723    8896 api_server.go:166] Checking apiserver status ...
	I0624 04:50:42.187369    8896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:50:42.227414    8896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1997/cgroup
	W0624 04:50:42.239064    8896 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1997/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0624 04:50:42.261115    8896 ssh_runner.go:195] Run: ls
	I0624 04:50:42.271655    8896 api_server.go:253] Checking apiserver healthz at https://172.31.223.254:8443/healthz ...
	I0624 04:50:42.281457    8896 api_server.go:279] https://172.31.223.254:8443/healthz returned 200:
	ok
	I0624 04:50:42.281457    8896 status.go:422] ha-340000 apiserver status = Running (err=<nil>)
	I0624 04:50:42.281457    8896 status.go:257] ha-340000 status: &{Name:ha-340000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0624 04:50:42.281457    8896 status.go:255] checking status of ha-340000-m02 ...
	I0624 04:50:42.283039    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:50:44.438089    8896 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 04:50:44.438214    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:44.438258    8896 status.go:330] ha-340000-m02 host status = "Stopped" (err=<nil>)
	I0624 04:50:44.438258    8896 status.go:343] host is not running, skipping remaining checks
	I0624 04:50:44.438258    8896 status.go:257] ha-340000-m02 status: &{Name:ha-340000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0624 04:50:44.438433    8896 status.go:255] checking status of ha-340000-m03 ...
	I0624 04:50:44.439748    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:50:46.652564    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:46.652564    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:46.663392    8896 status.go:330] ha-340000-m03 host status = "Running" (err=<nil>)
	I0624 04:50:46.663392    8896 host.go:66] Checking if "ha-340000-m03" exists ...
	I0624 04:50:46.664225    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:50:48.872682    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:48.872682    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:48.872682    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:50:51.402449    8896 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:50:51.402449    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:51.402449    8896 host.go:66] Checking if "ha-340000-m03" exists ...
	I0624 04:50:51.413092    8896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0624 04:50:51.413092    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:50:53.581802    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:53.581802    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:53.581802    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:50:56.127511    8896 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:50:56.138451    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:56.138613    8896 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:50:56.234828    8896 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8217167s)
	I0624 04:50:56.247228    8896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:50:56.275918    8896 kubeconfig.go:125] found "ha-340000" server: "https://172.31.223.254:8443"
	I0624 04:50:56.275974    8896 api_server.go:166] Checking apiserver status ...
	I0624 04:50:56.287016    8896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:50:56.329685    8896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2207/cgroup
	W0624 04:50:56.346694    8896 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0624 04:50:56.360764    8896 ssh_runner.go:195] Run: ls
	I0624 04:50:56.368018    8896 api_server.go:253] Checking apiserver healthz at https://172.31.223.254:8443/healthz ...
	I0624 04:50:56.375027    8896 api_server.go:279] https://172.31.223.254:8443/healthz returned 200:
	ok
	I0624 04:50:56.379063    8896 status.go:422] ha-340000-m03 apiserver status = Running (err=<nil>)
	I0624 04:50:56.379063    8896 status.go:257] ha-340000-m03 status: &{Name:ha-340000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0624 04:50:56.379063    8896 status.go:255] checking status of ha-340000-m04 ...
	I0624 04:50:56.379159    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m04 ).state
	I0624 04:50:58.499413    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:50:58.499413    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:50:58.499413    8896 status.go:330] ha-340000-m04 host status = "Running" (err=<nil>)
	I0624 04:50:58.499413    8896 host.go:66] Checking if "ha-340000-m04" exists ...
	I0624 04:50:58.500470    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m04 ).state
	I0624 04:51:00.614277    8896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:51:00.625157    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:51:00.625157    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m04 ).networkadapters[0]).ipaddresses[0]
	I0624 04:51:03.128580    8896 main.go:141] libmachine: [stdout =====>] : 172.31.222.135
	
	I0624 04:51:03.128580    8896 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:51:03.139965    8896 host.go:66] Checking if "ha-340000-m04" exists ...
	I0624 04:51:03.155286    8896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0624 04:51:03.155286    8896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m04 ).state

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-340000 -n ha-340000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-340000 -n ha-340000: (12.1152652s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 logs -n 25: (8.5920036s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:45 PDT | 24 Jun 24 04:45 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:45 PDT | 24 Jun 24 04:45 PDT |
	|         | ha-340000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:45 PDT | 24 Jun 24 04:45 PDT |
	|         | ha-340000:/home/docker/cp-test_ha-340000-m03_ha-340000.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:45 PDT | 24 Jun 24 04:46 PDT |
	|         | ha-340000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000 sudo cat                                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:46 PDT | 24 Jun 24 04:46 PDT |
	|         | /home/docker/cp-test_ha-340000-m03_ha-340000.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:46 PDT | 24 Jun 24 04:46 PDT |
	|         | ha-340000-m02:/home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:46 PDT | 24 Jun 24 04:46 PDT |
	|         | ha-340000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000-m02 sudo cat                                                                                  | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:46 PDT | 24 Jun 24 04:46 PDT |
	|         | /home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:46 PDT | 24 Jun 24 04:47 PDT |
	|         | ha-340000-m04:/home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:47 PDT |
	|         | ha-340000-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000-m04 sudo cat                                                                                  | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:47 PDT |
	|         | /home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-340000 cp testdata\cp-test.txt                                                                                        | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:47 PDT |
	|         | ha-340000-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:47 PDT |
	|         | ha-340000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:47 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:47 PDT | 24 Jun 24 04:48 PDT |
	|         | ha-340000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:48 PDT | 24 Jun 24 04:48 PDT |
	|         | ha-340000:/home/docker/cp-test_ha-340000-m04_ha-340000.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:48 PDT | 24 Jun 24 04:48 PDT |
	|         | ha-340000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000 sudo cat                                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:48 PDT | 24 Jun 24 04:48 PDT |
	|         | /home/docker/cp-test_ha-340000-m04_ha-340000.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:48 PDT | 24 Jun 24 04:48 PDT |
	|         | ha-340000-m02:/home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:48 PDT | 24 Jun 24 04:49 PDT |
	|         | ha-340000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000-m02 sudo cat                                                                                  | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:49 PDT | 24 Jun 24 04:49 PDT |
	|         | /home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt                                                                      | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:49 PDT | 24 Jun 24 04:49 PDT |
	|         | ha-340000-m03:/home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n                                                                                                         | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:49 PDT | 24 Jun 24 04:49 PDT |
	|         | ha-340000-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-340000 ssh -n ha-340000-m03 sudo cat                                                                                  | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:49 PDT | 24 Jun 24 04:49 PDT |
	|         | /home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-340000 node stop m02 -v=7                                                                                             | ha-340000 | minikube1\jenkins | v1.33.1 | 24 Jun 24 04:49 PDT | 24 Jun 24 04:50 PDT |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
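	The cp/ssh pairs in the Audit table above follow a simple round-trip pattern: copy a file onto a node, then cat it back over ssh to compare contents. A condensed example of one such pair, with the paths and node name taken from the entries above (illustrative only):

	# copy the local test file to the m04 node, then read it back
	out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m04:/home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 sudo cat /home/docker/cp-test.txt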
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 04:21:04
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 04:21:04.440454    7764 out.go:291] Setting OutFile to fd 372 ...
	I0624 04:21:04.441412    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:21:04.441412    7764 out.go:304] Setting ErrFile to fd 792...
	I0624 04:21:04.441614    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 04:21:04.468985    7764 out.go:298] Setting JSON to false
	I0624 04:21:04.471719    7764 start.go:129] hostinfo: {"hostname":"minikube1","uptime":18519,"bootTime":1719209544,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 04:21:04.472731    7764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 04:21:04.480371    7764 out.go:177] * [ha-340000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 04:21:04.484324    7764 notify.go:220] Checking for updates...
	I0624 04:21:04.486941    7764 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:21:04.489306    7764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 04:21:04.491459    7764 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 04:21:04.493396    7764 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 04:21:04.497092    7764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 04:21:04.500940    7764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 04:21:09.916557    7764 out.go:177] * Using the hyperv driver based on user configuration
	I0624 04:21:09.920604    7764 start.go:297] selected driver: hyperv
	I0624 04:21:09.920773    7764 start.go:901] validating driver "hyperv" against <nil>
	I0624 04:21:09.920773    7764 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 04:21:09.969689    7764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 04:21:09.971001    7764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:21:09.971001    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:21:09.971001    7764 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0624 04:21:09.971001    7764 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 04:21:09.971001    7764 start.go:340] cluster config:
	{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:21:09.971584    7764 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 04:21:09.976713    7764 out.go:177] * Starting "ha-340000" primary control-plane node in "ha-340000" cluster
	I0624 04:21:09.982129    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:21:09.982369    7764 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 04:21:09.982467    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:21:09.982805    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:21:09.982805    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:21:09.983385    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:21:09.983385    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json: {Name:mk5bcae1e9566ffb94b611ccf4e4863330a7bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
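	The cluster config dumped above is what gets persisted to the profile's config.json at the path shown. For illustration (field names assumed to match the ClusterConfig dump), a few top-level values can be inspected from PowerShell:

	# pretty-print selected fields of the saved profile config (illustrative)
	Get-Content 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json' |
	  ConvertFrom-Json | Select-Object Name, Driver, Memory, CPUs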
	I0624 04:21:09.984755    7764 start.go:360] acquireMachinesLock for ha-340000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:21:09.984755    7764 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-340000"
	I0624 04:21:09.984755    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:21:09.984755    7764 start.go:125] createHost starting for "" (driver="hyperv")
	I0624 04:21:09.988253    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:21:09.989141    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:21:09.989141    7764 client.go:168] LocalClient.Create starting
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:21:09.989427    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:21:09.990193    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:21:09.990392    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:21:09.990392    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:21:09.990580    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:21:12.033672    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:21:12.033672    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:12.033803    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:21:13.712611    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:21:13.712819    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:13.712819    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:21:15.173895    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:21:15.174941    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:15.175134    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:21:18.728972    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:21:18.729242    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:18.731483    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:21:19.245672    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:21:19.686943    7764 main.go:141] libmachine: Creating VM...
	I0624 04:21:19.686943    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:22.569717    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:21:22.569717    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:21:24.339286    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:21:24.339286    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:24.339568    7764 main.go:141] libmachine: Creating VHD
	I0624 04:21:24.339568    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:21:28.188000    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6FA1222A-B8A3-4B00-8259-E96C762FA31D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:21:28.188088    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:28.188088    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:21:28.188172    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:21:28.196886    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:21:31.381741    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:31.381741    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:31.382614    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd' -SizeBytes 20000MB
	I0624 04:21:33.876048    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:33.876048    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:33.876472    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:21:37.492970    7764 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-340000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:21:37.492970    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:37.493944    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000 -DynamicMemoryEnabled $false
	I0624 04:21:39.733419    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:39.734365    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:39.734365    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000 -Count 2
	I0624 04:21:41.926777    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:41.926983    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:41.927071    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\boot2docker.iso'
	I0624 04:21:44.518588    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:44.518588    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:44.518811    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\disk.vhd'
	I0624 04:21:47.186210    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:47.186413    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:47.186413    7764 main.go:141] libmachine: Starting VM...
	I0624 04:21:47.186525    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000
	I0624 04:21:50.225145    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:50.225145    7764 main.go:141] libmachine: [stderr =====>] : 
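	Taken together, the VM-creation steps logged between 04:21:24 and 04:21:50 amount to the Hyper-V cmdlet sequence below; a condensed sketch with names, paths and sizes copied from the log (illustrative only; the driver also writes an SSH-key tar header into the fixed VHD between the first two steps, which this sketch omits):

	$m = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000'
	Hyper-V\New-VHD -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed            # small fixed VHD to seed the disk
	Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$m\disk.vhd" -SizeBytes 20000MB              # grow to the requested disk size
	Hyper-V\New-VM ha-340000 -Path $m -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	Hyper-V\Set-VMMemory -VMName ha-340000 -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor ha-340000 -Count 2
	Hyper-V\Set-VMDvdDrive -VMName ha-340000 -Path "$m\boot2docker.iso"    # attach the boot ISO
	Hyper-V\Add-VMHardDiskDrive -VMName ha-340000 -Path "$m\disk.vhd"
	Hyper-V\Start-VM ha-340000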
	I0624 04:21:50.225330    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:21:50.225368    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:52.491626    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:21:55.042169    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:21:55.042617    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:56.056313    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:21:58.291501    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:21:58.291501    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:21:58.291927    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:00.837856    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:00.837856    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:01.851924    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:04.099467    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:04.099467    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:04.099836    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:06.625186    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:06.625498    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:07.631560    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:09.812566    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:09.812640    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:09.812834    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:12.338212    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:22:12.339008    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:13.345133    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:15.624625    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:18.157966    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:20.340082    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:20.340775    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:20.340775    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:22:20.340938    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:22.519783    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:22.519993    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:22.519993    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:25.084843    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:25.084843    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:25.091631    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:25.102901    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:25.102901    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:22:25.243672    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:22:25.243759    7764 buildroot.go:166] provisioning hostname "ha-340000"
	I0624 04:22:25.243889    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:27.356736    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:29.917599    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:29.917684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:29.925952    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:29.926668    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:29.926668    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000 && echo "ha-340000" | sudo tee /etc/hostname
	I0624 04:22:30.097272    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000
	
	I0624 04:22:30.097272    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:32.279403    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:32.279403    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:32.279684    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:34.880756    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:34.880997    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:34.886216    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:34.886431    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:34.886431    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:22:35.042971    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:22:35.042971    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:22:35.042971    7764 buildroot.go:174] setting up certificates
	I0624 04:22:35.042971    7764 provision.go:84] configureAuth start
	I0624 04:22:35.042971    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:37.199678    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:37.199678    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:37.200116    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:39.778099    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:41.987549    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:41.987743    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:41.987875    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:44.604967    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:44.605027    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:44.605027    7764 provision.go:143] copyHostCerts
	I0624 04:22:44.605027    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:22:44.605027    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:22:44.605027    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:22:44.605790    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:22:44.606959    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:22:44.607324    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:22:44.607324    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:22:44.607731    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:22:44.608681    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:22:44.608935    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:22:44.608935    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:22:44.608935    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:22:44.610235    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000 san=[127.0.0.1 172.31.219.170 ha-340000 localhost minikube]
	I0624 04:22:45.018783    7764 provision.go:177] copyRemoteCerts
	I0624 04:22:45.037552    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:22:45.037552    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:47.202779    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:47.203671    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:47.203671    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:49.806003    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:49.806250    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:49.806250    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:22:49.923562    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8859914s)
	I0624 04:22:49.923562    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:22:49.924207    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:22:49.970115    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:22:49.970666    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:22:50.017371    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:22:50.017371    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0624 04:22:50.067323    7764 provision.go:87] duration metric: took 15.0242947s to configureAuth
	I0624 04:22:50.067323    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:22:50.068297    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:22:50.068444    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:52.216942    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:52.217239    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:52.217239    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:54.777665    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:54.778687    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:54.787038    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:54.787739    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:54.787739    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:22:54.925775    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:22:54.925775    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:22:54.925775    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:22:54.925775    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:22:57.101646    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:22:57.102006    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:57.102157    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:22:59.628474    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:22:59.628640    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:22:59.634236    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:22:59.634236    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:22:59.634858    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:22:59.809917    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:22:59.809917    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:01.965133    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:04.562670    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:04.562984    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:04.569453    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:04.569618    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:04.569618    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:23:06.793171    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 04:23:06.793171    7764 machine.go:97] duration metric: took 46.45222s to provisionDockerMachine
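	The diff-or-replace idiom above installs the generated unit only when it differs from what is already on the node (here /lib/systemd/system/docker.service did not exist yet, hence the diff error followed by a fresh enable). Whether the daemon actually came up can be checked the same way the tests talk to nodes, e.g. (illustrative only):

	out/minikube-windows-amd64.exe -p ha-340000 ssh sudo systemctl is-active docker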
	I0624 04:23:06.793171    7764 client.go:171] duration metric: took 1m56.8035863s to LocalClient.Create
	I0624 04:23:06.793171    7764 start.go:167] duration metric: took 1m56.8035863s to libmachine.API.Create "ha-340000"
	I0624 04:23:06.793171    7764 start.go:293] postStartSetup for "ha-340000" (driver="hyperv")
	I0624 04:23:06.793171    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:23:06.806161    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:23:06.806161    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:08.926143    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:08.926143    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:08.926377    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:11.466322    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:11.466322    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:11.467489    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:11.587250    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7810151s)
	I0624 04:23:11.600522    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:23:11.608510    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:23:11.608623    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:23:11.609267    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:23:11.610539    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:23:11.610539    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:23:11.622168    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:23:11.642744    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:23:11.691185    7764 start.go:296] duration metric: took 4.8979958s for postStartSetup
	I0624 04:23:11.694303    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:13.871692    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:13.872159    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:13.872159    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:16.477510    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:16.478115    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:16.478346    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:23:16.481107    7764 start.go:128] duration metric: took 2m6.495872s to createHost
	I0624 04:23:16.481306    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:18.621095    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:21.198075    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:21.198075    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:21.203031    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:21.203711    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:21.203711    7764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 04:23:21.335720    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228201.341230144
	
	I0624 04:23:21.335720    7764 fix.go:216] guest clock: 1719228201.341230144
	I0624 04:23:21.335720    7764 fix.go:229] Guest: 2024-06-24 04:23:21.341230144 -0700 PDT Remote: 2024-06-24 04:23:16.4812468 -0700 PDT m=+132.145167801 (delta=4.859983344s)
	I0624 04:23:21.335720    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:23.513971    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:23.514115    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:23.514115    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:26.064808    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:26.065655    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:26.071109    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:23:26.072117    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.219.170 22 <nil> <nil>}
	I0624 04:23:26.072117    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228201
	I0624 04:23:26.223364    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:23:21 UTC 2024
	
	I0624 04:23:26.223364    7764 fix.go:236] clock set: Mon Jun 24 11:23:21 UTC 2024
	 (err=<nil>)
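The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compute the guest/host delta (4.86s here), and then write a corrected time back with `sudo date -s @<unix>`. A rough Go sketch of that sequence follows; it is not the minikube implementation, sshRun is a hypothetical SSH helper, and both the 2-second threshold and the choice of host time as the target are assumptions.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock reads the guest clock, reports the skew against the local wall
// clock, and pushes a corrected timestamp back when the drift is noticeable.
func syncGuestClock(sshRun func(cmd string) (string, error)) error {
	out, err := sshRun("date +%s.%N") // e.g. "1719228201.341230144"
	if err != nil {
		return err
	}
	secStr := strings.SplitN(strings.TrimSpace(out), ".", 2)[0]
	guestSec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return err
	}
	delta := time.Unix(guestSec, 0).Sub(time.Now().Truncate(time.Second))
	fmt.Printf("guest clock skew: %v\n", delta)
	if delta > 2*time.Second || delta < -2*time.Second { // threshold assumed for the sketch
		_, err = sshRun(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}

func main() {
	_ = syncGuestClock // wire in a real SSH runner to use the sketch
}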
	I0624 04:23:26.223364    7764 start.go:83] releasing machines lock for "ha-340000", held for 2m16.2380929s
	I0624 04:23:26.223364    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:28.370242    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:30.938259    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:30.938259    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:30.943876    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:23:30.943876    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:30.953669    7764 ssh_runner.go:195] Run: cat /version.json
	I0624 04:23:30.953669    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:23:33.194566    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:23:33.194922    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:33.195066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:33.195066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:23:35.952934    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:35.952934    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:35.953511    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:35.972815    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:23:35.972815    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:23:35.972815    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:23:36.049516    7764 ssh_runner.go:235] Completed: cat /version.json: (5.0958285s)
	I0624 04:23:36.062876    7764 ssh_runner.go:195] Run: systemctl --version
	I0624 04:23:36.126207    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1823117s)
	I0624 04:23:36.138544    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0624 04:23:36.147541    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:23:36.158992    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:23:36.186226    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 04:23:36.186226    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:23:36.186688    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:23:36.234270    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:23:36.268492    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:23:36.287362    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:23:36.298844    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:23:36.334613    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:23:36.363421    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:23:36.394789    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:23:36.429368    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:23:36.461842    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:23:36.501126    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:23:36.536096    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:23:36.571166    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:23:36.602701    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:23:36.633777    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:36.831442    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
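The series of sed calls above rewrites /etc/containerd/config.toml so containerd uses the "cgroupfs" cgroup driver (SystemdCgroup = false), pins the pause image, and normalizes the runc runtime before the daemon is restarted. A small Go sketch of the central edit, assuming the default config path; it covers only the SystemdCgroup flip, not the other sed passes.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Force SystemdCgroup = false on whatever line currently sets it, keeping indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
}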
	I0624 04:23:36.864134    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:23:36.876552    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:23:36.915498    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:23:36.950864    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:23:36.989119    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:23:37.025144    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:23:37.064005    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:23:37.128646    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:23:37.151027    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:23:37.195115    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:23:37.214468    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:23:37.232424    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:23:37.275601    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:23:37.475078    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:23:37.650885    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:23:37.651141    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
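docker.go then pushes a small daemon.json (130 bytes) that carries the same "cgroupfs" choice to dockerd. The exact file contents are not shown in this log, so the fields in the sketch below are assumptions based on the driver named on the previous line.

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"}, // assumed field, matches the logged driver
		"log-driver": "json-file",                              // assumed field
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", append(data, '\n'), 0644); err != nil {
		log.Fatal(err)
	}
}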
	I0624 04:23:37.700501    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:37.884853    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:23:40.393713    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5088508s)
	I0624 04:23:40.405828    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:23:40.441287    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:23:40.474575    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:23:40.686866    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:23:40.889847    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:41.085222    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:23:41.126761    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:23:41.160981    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:41.330340    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 04:23:41.432864    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:23:41.447686    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:23:41.457333    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:23:41.468396    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:23:41.485785    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:23:41.541304    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 04:23:41.549992    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:23:41.592759    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:23:41.632360    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:23:41.632455    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:23:41.636049    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:23:41.639061    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:23:41.639061    7764 ip.go:210] interface addr: 172.31.208.1/20
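The ip.go lines above walk the host's network interfaces looking for the first one whose name starts with "vEthernet (Default Switch)" and take its IPv4 address (172.31.208.1), which later becomes host.minikube.internal inside the guest. A minimal Go sketch of that lookup, independent of minikube's ip.go:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix returns the first IPv4 address on an interface whose name
// starts with prefix, as the getIPForInterface log lines describe.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				return ipn.IP, nil // e.g. 172.31.208.1 for the Default Switch
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}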
	I0624 04:23:41.651675    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:23:41.656963    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:23:41.690104    7764 kubeadm.go:877] updating cluster {Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 04:23:41.690317    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:23:41.699038    7764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 04:23:41.719639    7764 docker.go:685] Got preloaded images: 
	I0624 04:23:41.719639    7764 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
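The preload decision above is a simple membership test: list the images already present in the guest's Docker daemon and, if the pinned kube-apiserver image is missing, fall back to copying and extracting the preload tarball. A compact Go sketch of that check, assuming a reachable docker CLI; it is not the docker.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadNeeded reports whether the preload tarball must be copied over,
// i.e. whether the given image is absent from `docker images`.
func preloadNeeded(image string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == image {
			return false, nil // already loaded: skip the tarball
		}
	}
	return true, nil
}

func main() {
	need, err := preloadNeeded("registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println(need, err)
}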
	I0624 04:23:41.733682    7764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 04:23:41.763470    7764 ssh_runner.go:195] Run: which lz4
	I0624 04:23:41.769903    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0624 04:23:41.783414    7764 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 04:23:41.789824    7764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 04:23:41.789824    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0624 04:23:43.609061    7764 docker.go:649] duration metric: took 1.8391507s to copy over tarball
	I0624 04:23:43.622011    7764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 04:23:52.086436    7764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4643447s)
	I0624 04:23:52.086436    7764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0624 04:23:52.148557    7764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 04:23:52.174937    7764 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0624 04:23:52.226917    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:52.437942    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:23:56.005781    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.5677733s)
	I0624 04:23:56.017780    7764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 04:23:56.042635    7764 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 04:23:56.042635    7764 cache_images.go:84] Images are preloaded, skipping loading
	I0624 04:23:56.042773    7764 kubeadm.go:928] updating node { 172.31.219.170 8443 v1.30.2 docker true true} ...
	I0624 04:23:56.043059    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.219.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 04:23:56.057045    7764 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
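The `docker info --format {{.CgroupDriver}}` probe above is how the runtime's cgroup driver is detected so that the kubelet configuration generated below can declare the matching cgroupDriver: cgroupfs. A minimal local version of that probe, assuming a reachable docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks the daemon which cgroup driver it is using,
// typically "cgroupfs" or "systemd"; kubelet must be configured to match.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	fmt.Println(driver, err)
}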
	I0624 04:23:56.094439    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:23:56.094439    7764 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 04:23:56.094439    7764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 04:23:56.095442    7764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.219.170 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-340000 NodeName:ha-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.219.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.219.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 04:23:56.095442    7764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.219.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-340000"
	  kubeletExtraArgs:
	    node-ip: 172.31.219.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.219.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0624 04:23:56.095442    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:23:56.108035    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:23:56.138436    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:23:56.138670    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0624 04:23:56.151951    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:23:56.171498    7764 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 04:23:56.185108    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0624 04:23:56.202043    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0624 04:23:56.235551    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:23:56.268996    7764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0624 04:23:56.301083    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0624 04:23:56.348034    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:23:56.354061    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
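The bash one-liner above (and its earlier twin for host.minikube.internal) is an upsert on /etc/hosts: drop any stale line for control-plane.minikube.internal, then append the current HA VIP mapping. The same logic sketched in Go, assuming direct write access to /etc/hosts; it is illustrative, not minikube source.

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost removes any existing mapping for host and appends "ip<TAB>host".
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.31.223.254", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}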
	I0624 04:23:56.387829    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:23:56.575871    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:23:56.605517    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.219.170
	I0624 04:23:56.605517    7764 certs.go:194] generating shared ca certs ...
	I0624 04:23:56.605517    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.606272    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:23:56.606272    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:23:56.606898    7764 certs.go:256] generating profile certs ...
	I0624 04:23:56.607696    7764 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:23:56.607893    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt with IP's: []
	I0624 04:23:56.837938    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt ...
	I0624 04:23:56.837938    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.crt: {Name:mk7a961717cd144a9a6226fc54cbc5311507d6a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.838921    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key ...
	I0624 04:23:56.838921    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key: {Name:mkb0e92480b41b7bce6e00ed95fc97da3e4d0eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:56.840444    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd
	I0624 04:23:56.841030    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.223.254]
	I0624 04:23:57.197931    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd ...
	I0624 04:23:57.197931    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd: {Name:mk0f4e42831177c49aaaa6224c50197a22ff86db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.198322    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd ...
	I0624 04:23:57.199325    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd: {Name:mkcb22653d05488567d8983f905ac28f3454628f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.200162    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.4eb81fdd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:23:57.211183    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.4eb81fdd -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:23:57.212163    7764 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
	I0624 04:23:57.213222    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt with IP's: []
	I0624 04:23:57.742401    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt ...
	I0624 04:23:57.742401    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt: {Name:mk2780f04cc254cb73365d9b3a14af5e323b09a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:23:57.744698    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key ...
	I0624 04:23:57.744698    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key: {Name:mk101c0b70a91ff5ab1d2d4d42de1908d2028086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
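The profile certificates generated above are ordinary x509 leaves signed by the shared minikubeCA key; the apiserver one is issued for the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 172.31.219.170 and the HA VIP 172.31.223.254). A condensed Go sketch using crypto/x509; the CA here is generated on the fly for self-containment rather than loaded from .minikube\ca.key, and key sizes and validity periods are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.31.219.170"), net.ParseIP("172.31.223.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued %d-byte apiserver-style cert with %d IP SANs", len(leafDER), len(leafTmpl.IPAddresses))
}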
	I0624 04:23:57.745282    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:23:57.746323    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:23:57.746541    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:23:57.746729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:23:57.746880    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:23:57.747033    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:23:57.747161    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:23:57.755492    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:23:57.756741    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:23:57.756741    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:23:57.757505    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:23:57.757641    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:23:57.757878    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:23:57.758114    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:23:57.758338    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:23:57.758338    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:23:57.758951    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:23:57.759142    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:57.759142    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:23:57.806308    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:23:57.846394    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:23:57.894406    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:23:57.936976    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0624 04:23:57.981900    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:23:58.026283    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:23:58.071174    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:23:58.118729    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:23:58.162689    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:23:58.209251    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:23:58.264136    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 04:23:58.319246    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:23:58.344517    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:23:58.379159    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:23:58.385894    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:23:58.398545    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:23:58.419137    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 04:23:58.449497    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:23:58.481085    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.489104    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.502444    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:23:58.523530    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:23:58.566687    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:23:58.596556    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.604266    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.616277    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:23:58.643356    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
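The three openssl/ln pairs above install trusted certificates in the standard OpenSSL CApath layout, where each PEM under /usr/share/ca-certificates is reachable through an /etc/ssl/certs/<subject-hash>.0 symlink. A short Go sketch that shells out to openssl for the hash, mirroring the log's approach; paths are illustrative.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCert computes the certificate's OpenSSL subject hash and creates the
// "<hash>.0" symlink that CApath-based lookups expect.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in this log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}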
	I0624 04:23:58.673517    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:23:58.680941    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:23:58.681514    7764 kubeadm.go:391] StartCluster: {Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clu
sterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:23:58.691375    7764 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 04:23:58.724770    7764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 04:23:58.755204    7764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 04:23:58.784577    7764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 04:23:58.800464    7764 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 04:23:58.800464    7764 kubeadm.go:156] found existing configuration files:
	
	I0624 04:23:58.811776    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 04:23:58.828218    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 04:23:58.839439    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 04:23:58.867660    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 04:23:58.888876    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 04:23:58.900681    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 04:23:58.929572    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 04:23:58.947003    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 04:23:58.959302    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 04:23:58.986530    7764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 04:23:58.999749    7764 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 04:23:59.011493    7764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 04:23:59.028055    7764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 04:23:59.464999    7764 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 04:24:15.136135    7764 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0624 04:24:15.136316    7764 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 04:24:15.136432    7764 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 04:24:15.136605    7764 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 04:24:15.136605    7764 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 04:24:15.136605    7764 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 04:24:15.139173    7764 out.go:204]   - Generating certificates and keys ...
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 04:24:15.139173    7764 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0624 04:24:15.140270    7764 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0624 04:24:15.140395    7764 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0624 04:24:15.140505    7764 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0624 04:24:15.140605    7764 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-340000 localhost] and IPs [172.31.219.170 127.0.0.1 ::1]
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0624 04:24:15.140853    7764 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-340000 localhost] and IPs [172.31.219.170 127.0.0.1 ::1]
	I0624 04:24:15.141549    7764 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0624 04:24:15.141710    7764 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0624 04:24:15.141810    7764 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 04:24:15.141810    7764 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 04:24:15.142393    7764 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 04:24:15.142645    7764 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 04:24:15.142901    7764 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 04:24:15.143067    7764 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 04:24:15.144863    7764 out.go:204]   - Booting up control plane ...
	I0624 04:24:15.145932    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 04:24:15.146017    7764 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 04:24:15.146750    7764 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 04:24:15.146791    7764 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 04:24:15.146791    7764 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0624 04:24:15.147325    7764 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00217517s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0624 04:24:15.147494    7764 kubeadm.go:309] [api-check] The API server is healthy after 9.024821827s
	I0624 04:24:15.148027    7764 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 04:24:15.148229    7764 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 04:24:15.148229    7764 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 04:24:15.148749    7764 kubeadm.go:309] [mark-control-plane] Marking the node ha-340000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 04:24:15.148996    7764 kubeadm.go:309] [bootstrap-token] Using token: uksowa.dnkew0jmxpcatm2d
	I0624 04:24:15.151402    7764 out.go:204]   - Configuring RBAC rules ...
	I0624 04:24:15.151402    7764 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 04:24:15.151402    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 04:24:15.152109    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 04:24:15.152479    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 04:24:15.152815    7764 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 04:24:15.153107    7764 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 04:24:15.153464    7764 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 04:24:15.153636    7764 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 04:24:15.153766    7764 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 04:24:15.153766    7764 kubeadm.go:309] 
	I0624 04:24:15.153975    7764 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 04:24:15.154026    7764 kubeadm.go:309] 
	I0624 04:24:15.154248    7764 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 04:24:15.154248    7764 kubeadm.go:309] 
	I0624 04:24:15.154248    7764 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 04:24:15.154248    7764 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 04:24:15.154248    7764 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 04:24:15.154248    7764 kubeadm.go:309] 
	I0624 04:24:15.154802    7764 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 04:24:15.154802    7764 kubeadm.go:309] 
	I0624 04:24:15.155014    7764 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 04:24:15.155081    7764 kubeadm.go:309] 
	I0624 04:24:15.155261    7764 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 04:24:15.155396    7764 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 04:24:15.155692    7764 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 04:24:15.155892    7764 kubeadm.go:309] 
	I0624 04:24:15.156103    7764 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 04:24:15.156289    7764 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 04:24:15.156344    7764 kubeadm.go:309] 
	I0624 04:24:15.156537    7764 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token uksowa.dnkew0jmxpcatm2d \
	I0624 04:24:15.156841    7764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 \
	I0624 04:24:15.156896    7764 kubeadm.go:309] 	--control-plane 
	I0624 04:24:15.156949    7764 kubeadm.go:309] 
	I0624 04:24:15.157051    7764 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 04:24:15.157051    7764 kubeadm.go:309] 
	I0624 04:24:15.157051    7764 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token uksowa.dnkew0jmxpcatm2d \
	I0624 04:24:15.157586    7764 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
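The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA public key, so a joining node can authenticate the control plane before trusting it. As a hedged sketch (standard kubeadm PKI layout assumed; not part of the recorded run), the same hash can be recomputed on the control-plane node and compared against the sha256 value shown above:

    # recompute the CA public-key hash that kubeadm printed
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'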
	I0624 04:24:15.157628    7764 cni.go:84] Creating CNI manager for ""
	I0624 04:24:15.157628    7764 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 04:24:15.160476    7764 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0624 04:24:15.178108    7764 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0624 04:24:15.185898    7764 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0624 04:24:15.185959    7764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0624 04:24:15.236910    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
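The cni.yaml applied here is the kindnet manifest selected at cni.go:136; it is expected to create a DaemonSet in kube-system. A hypothetical follow-up check, using the same in-VM kubectl and kubeconfig paths as the run above (not executed in this run):

    sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets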
	I0624 04:24:15.801252    7764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 04:24:15.816797    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000 minikube.k8s.io/updated_at=2024_06_24T04_24_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=true
	I0624 04:24:15.816797    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:15.865474    7764 ops.go:34] apiserver oom_adj: -16
	I0624 04:24:16.053451    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:16.554318    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:17.055668    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:17.555246    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:18.060924    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:18.560879    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:19.064259    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:19.564139    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:20.066338    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:20.554764    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:21.059251    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:21.563335    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:22.064537    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:22.568492    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:23.054248    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:23.554295    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:24.061114    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:24.561845    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:25.053375    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:25.555713    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:26.058954    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:26.556888    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:27.064526    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 04:24:27.188097    7764 kubeadm.go:1107] duration metric: took 11.386803s to wait for elevateKubeSystemPrivileges
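The burst of identical "get sa default" commands above is a poll loop: the call keeps failing until the control plane has created the default ServiceAccount, which is the readiness signal elevateKubeSystemPrivileges waits for (about 11.4s here). A minimal sketch of the same wait, assuming the in-VM kubectl and kubeconfig paths from this run:

    # hedged sketch: poll until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done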
	W0624 04:24:27.188254    7764 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 04:24:27.188254    7764 kubeadm.go:393] duration metric: took 28.5066348s to StartCluster
	I0624 04:24:27.188254    7764 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:24:27.188625    7764 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:24:27.190266    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:24:27.191900    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0624 04:24:27.191900    7764 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:24:27.191900    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:24:27.191900    7764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 04:24:27.191900    7764 addons.go:69] Setting storage-provisioner=true in profile "ha-340000"
	I0624 04:24:27.191900    7764 addons.go:69] Setting default-storageclass=true in profile "ha-340000"
	I0624 04:24:27.191900    7764 addons.go:234] Setting addon storage-provisioner=true in "ha-340000"
	I0624 04:24:27.191900    7764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-340000"
	I0624 04:24:27.191900    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:24:27.191900    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:24:27.193487    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:27.194012    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:27.352442    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0624 04:24:27.795751    7764 start.go:946] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
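The sed pipeline just above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the Hyper-V host (172.31.208.1). The injected Corefile fragment, reconstructed from that command, plus a hypothetical way to inspect the result (not executed in this run):

    # expected fragment inside the Corefile after the replace:
    #   hosts {
    #      172.31.208.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml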
	I0624 04:24:29.485008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:29.486001    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:29.485008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:29.486001    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:29.486959    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:24:29.487597    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 04:24:29.489085    7764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 04:24:29.489085    7764 cert_rotation.go:137] Starting client certificate rotation controller
	I0624 04:24:29.489508    7764 addons.go:234] Setting addon default-storageclass=true in "ha-340000"
	I0624 04:24:29.489687    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:24:29.490896    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:29.491561    7764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 04:24:29.491561    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 04:24:29.491561    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:31.839622    7764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 04:24:31.839622    7764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 04:24:31.839622    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:31.908106    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:24:34.221948    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:24:34.221985    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:34.222081    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:24:34.771646    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:24:34.771987    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:34.772282    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:24:34.936733    7764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 04:24:37.022840    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:24:37.023554    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:37.023763    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:24:37.165177    7764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 04:24:37.341265    7764 round_trippers.go:463] GET https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0624 04:24:37.341349    7764 round_trippers.go:469] Request Headers:
	I0624 04:24:37.341441    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:24:37.341441    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:24:37.356087    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:24:37.356870    7764 round_trippers.go:463] PUT https://172.31.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0624 04:24:37.356870    7764 round_trippers.go:469] Request Headers:
	I0624 04:24:37.356870    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:24:37.356870    7764 round_trippers.go:473]     Content-Type: application/json
	I0624 04:24:37.356870    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:24:37.360461    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:24:37.365109    7764 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0624 04:24:37.367763    7764 addons.go:510] duration metric: took 10.1758252s for enable addons: enabled=[storage-provisioner default-storageclass]
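Both addons were installed as plain manifests (storage-provisioner.yaml, storageclass.yaml) followed by a PUT to the "standard" StorageClass, as logged above. A hypothetical verification from the host, assuming the profile name doubles as the kubectl context per minikube convention (not part of the recorded run):

    kubectl --context ha-340000 get storageclass
    kubectl --context ha-340000 -n kube-system get pod storage-provisioner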
	I0624 04:24:37.367763    7764 start.go:245] waiting for cluster config update ...
	I0624 04:24:37.367763    7764 start.go:254] writing updated cluster config ...
	I0624 04:24:37.374431    7764 out.go:177] 
	I0624 04:24:37.382235    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:24:37.382235    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:24:37.388747    7764 out.go:177] * Starting "ha-340000-m02" control-plane node in "ha-340000" cluster
	I0624 04:24:37.391432    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:24:37.391432    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:24:37.391432    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:24:37.391969    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:24:37.392074    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:24:37.396446    7764 start.go:360] acquireMachinesLock for ha-340000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:24:37.397173    7764 start.go:364] duration metric: took 727.7µs to acquireMachinesLock for "ha-340000-m02"
	I0624 04:24:37.397173    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:24:37.397173    7764 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0624 04:24:37.400405    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:24:37.400992    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:24:37.400992    7764 client.go:168] LocalClient.Create starting
	I0624 04:24:37.400992    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:24:37.401661    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:24:37.401661    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:24:37.401887    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:24:37.402034    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:24:37.402034    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:24:37.402034    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:24:39.318919    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:24:39.318919    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:39.319549    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:24:41.047822    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:24:41.047822    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:41.048685    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:24:42.553257    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:24:42.553257    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:42.553951    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:24:46.282665    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:24:46.283030    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:46.285249    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:24:46.793132    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:24:47.414021    7764 main.go:141] libmachine: Creating VM...
	I0624 04:24:47.414021    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:24:50.209235    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:24:50.209299    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:50.209299    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:24:50.209299    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:24:51.955162    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:24:51.955946    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:51.955946    7764 main.go:141] libmachine: Creating VHD
	I0624 04:24:51.955946    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:24:55.745966    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 768F4D99-0FAB-4B12-BB36-FE2052C9BA0F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:24:55.746079    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:55.746079    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:24:55.746154    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:24:55.755598    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:24:58.942374    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:24:58.943424    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:24:58.943503    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd' -SizeBytes 20000MB
	I0624 04:25:01.505369    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:01.505433    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:01.505433    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:25:05.172912    7764 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-340000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:25:05.173540    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:05.173540    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000-m02 -DynamicMemoryEnabled $false
	I0624 04:25:07.453955    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:07.454137    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:07.454250    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000-m02 -Count 2
	I0624 04:25:09.608832    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:09.608994    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:09.608994    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\boot2docker.iso'
	I0624 04:25:12.197231    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:12.197231    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:12.197345    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\disk.vhd'
	I0624 04:25:14.893400    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:14.894320    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:14.894381    7764 main.go:141] libmachine: Starting VM...
	I0624 04:25:14.894430    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000-m02
	I0624 04:25:17.989169    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:17.989169    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:17.989169    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:25:17.989814    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:20.355744    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:20.356405    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:20.356405    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:22.989302    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:22.989355    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:23.999995    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:26.249874    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:26.250015    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:26.250090    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:28.897605    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:28.897605    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:29.905864    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:32.170766    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:32.170766    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:32.170897    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:34.772612    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:34.772697    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:35.780004    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:38.017664    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:38.018480    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:38.018552    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:40.601923    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:25:40.601923    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:41.606232    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:43.841714    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:43.841714    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:43.842207    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:46.436094    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:48.637835    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:48.637835    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:48.637835    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:25:48.638845    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:50.886637    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:50.886637    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:50.886919    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:53.505477    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:53.505477    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:53.512076    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:25:53.522242    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:25:53.522242    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:25:53.639546    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:25:53.639617    7764 buildroot.go:166] provisioning hostname "ha-340000-m02"
	I0624 04:25:53.639617    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:25:55.827051    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:25:55.827051    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:55.827150    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:25:58.375036    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:25:58.375036    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:25:58.380693    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:25:58.381469    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:25:58.381469    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000-m02 && echo "ha-340000-m02" | sudo tee /etc/hostname
	I0624 04:25:58.528102    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000-m02
	
	I0624 04:25:58.528102    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:00.719171    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:00.720126    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:00.720206    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:03.334838    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:03.335192    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:03.340445    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:03.340445    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:03.340971    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:26:03.486211    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
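The SSH script above sets the hostname and rewrites the 127.0.1.1 entry in /etc/hosts so the node resolves its own name locally; the empty output means every branch succeeded. A quick hypothetical sanity check on the guest (not executed in this run):

    hostname                       # expect: ha-340000-m02
    cat /etc/hostname
    grep ha-340000-m02 /etc/hosts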
	I0624 04:26:03.487218    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:26:03.487218    7764 buildroot.go:174] setting up certificates
	I0624 04:26:03.487218    7764 provision.go:84] configureAuth start
	I0624 04:26:03.487218    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:05.662799    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:05.663055    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:05.663055    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:08.255722    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:08.255810    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:08.255880    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:10.427822    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:13.036980    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:13.036980    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:13.037081    7764 provision.go:143] copyHostCerts
	I0624 04:26:13.037081    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:26:13.037081    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:26:13.037081    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:26:13.037806    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:26:13.039153    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:26:13.039509    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:26:13.039548    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:26:13.039651    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:26:13.040819    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:26:13.041319    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:26:13.041319    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:26:13.041534    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:26:13.042819    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000-m02 san=[127.0.0.1 172.31.216.99 ha-340000-m02 localhost minikube]
	I0624 04:26:13.402007    7764 provision.go:177] copyRemoteCerts
	I0624 04:26:13.414354    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:26:13.414354    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:15.595976    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:15.596447    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:15.596532    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:18.162535    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:18.162764    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:18.162764    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:26:18.260547    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8461764s)
	I0624 04:26:18.261519    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:26:18.261519    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:26:18.310608    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:26:18.311101    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:26:18.360412    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:26:18.361400    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0624 04:26:18.409824    7764 provision.go:87] duration metric: took 14.922552s to configureAuth
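configureAuth generated a server certificate whose SANs include the new node's IP (172.31.216.99) and copied it, its key, and the CA into /etc/docker on the guest. A hedged way to confirm the SANs on the node with stock openssl (not part of the recorded run):

    openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'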
	I0624 04:26:18.409878    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:26:18.409878    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:26:18.410406    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:20.545474    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:20.545474    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:20.546009    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:23.115223    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:23.115223    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:23.121559    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:23.122148    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:23.122349    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:26:23.250055    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:26:23.250055    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:26:23.250598    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:26:23.250598    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:25.409363    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:25.409363    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:25.409459    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:27.987115    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:27.987185    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:27.992587    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:27.993439    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:27.993532    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.219.170"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:26:28.146924    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.219.170
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:26:28.146924    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:30.337298    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:32.919684    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:32.919684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:32.923724    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:32.924696    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:32.924696    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:26:35.093030    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 04:26:35.093030    7764 machine.go:97] duration metric: took 46.4550276s to provisionDockerMachine
	I0624 04:26:35.093030    7764 client.go:171] duration metric: took 1m57.6916074s to LocalClient.Create
	I0624 04:26:35.093030    7764 start.go:167] duration metric: took 1m57.6916074s to libmachine.API.Create "ha-340000"
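The docker.service install above is deliberately idempotent: diff exits non-zero when the current unit differs from (or, as on this fresh node, does not yet exist next to) the freshly written docker.service.new, so the fallback branch moves the file into place and reloads, enables, and restarts the daemon. The same pattern in isolation, assuming the new unit has already been written:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload \
           && sudo systemctl -f enable docker \
           && sudo systemctl -f restart docker; }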
	I0624 04:26:35.093030    7764 start.go:293] postStartSetup for "ha-340000-m02" (driver="hyperv")
	I0624 04:26:35.093030    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:26:35.106015    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:26:35.106015    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:37.247227    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:37.247227    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:37.247556    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:39.804070    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:39.804720    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:39.804853    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:26:39.901290    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7952578s)
	I0624 04:26:39.914835    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:26:39.922603    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:26:39.922603    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:26:39.922603    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:26:39.923840    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:26:39.923840    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:26:39.936239    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:26:39.955354    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:26:40.001075    7764 start.go:296] duration metric: took 4.908027s for postStartSetup
	I0624 04:26:40.003712    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:42.166041    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:42.166041    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:42.166806    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:44.756066    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:44.756308    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:44.756442    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:26:44.758843    7764 start.go:128] duration metric: took 2m7.3612037s to createHost
	I0624 04:26:44.758843    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:46.909322    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:46.910138    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:46.910232    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:49.479126    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:49.479126    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:49.486897    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:49.487454    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:49.487521    7764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 04:26:49.611913    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228409.617659899
	
	I0624 04:26:49.611992    7764 fix.go:216] guest clock: 1719228409.617659899
	I0624 04:26:49.611992    7764 fix.go:229] Guest: 2024-06-24 04:26:49.617659899 -0700 PDT Remote: 2024-06-24 04:26:44.7588432 -0700 PDT m=+340.421999101 (delta=4.858816699s)
	I0624 04:26:49.612053    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:51.806686    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:51.806686    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:51.807796    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:54.360122    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:54.360717    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:54.366743    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:26:54.367456    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.99 22 <nil> <nil>}
	I0624 04:26:54.367456    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228409
	I0624 04:26:54.501554    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:26:49 UTC 2024
	
	I0624 04:26:54.501554    7764 fix.go:236] clock set: Mon Jun 24 11:26:49 UTC 2024
	 (err=<nil>)
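The delta above comes from reading the guest clock with `date +%s.%N`, comparing it to the host's wall clock, and then resetting the guest with `date -s @<seconds>` when the two disagree. A small Go sketch of that comparison; the sample value is taken from the log and the 2-second threshold is an assumption, not minikube's actual cutoff:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1719228409.617659899") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1719228409.617659899") // sample value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest/host clock delta: %s\n", delta)
	if delta > 2*time.Second || delta < -2*time.Second { // threshold is illustrative
		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
	}
}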
	I0624 04:26:54.501554    7764 start.go:83] releasing machines lock for "ha-340000-m02", held for 2m17.1038801s
	I0624 04:26:54.501554    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:56.656929    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:26:56.656929    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:56.657237    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:26:59.232814    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:26:59.233696    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:26:59.238013    7764 out.go:177] * Found network options:
	I0624 04:26:59.240974    7764 out.go:177]   - NO_PROXY=172.31.219.170
	W0624 04:26:59.243674    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:26:59.245988    7764 out.go:177]   - NO_PROXY=172.31.219.170
	W0624 04:26:59.248651    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:26:59.250072    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:26:59.253482    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:26:59.253655    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:26:59.263647    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 04:26:59.263647    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m02 ).state
	I0624 04:27:01.487876    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:01.488890    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:01.488928    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:01.488928    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:01.488996    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:01.488996    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:04.227031    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:27:04.227031    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:04.227248    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:27:04.253655    7764 main.go:141] libmachine: [stdout =====>] : 172.31.216.99
	
	I0624 04:27:04.254270    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:04.254455    7764 sshutil.go:53] new ssh client: &{IP:172.31.216.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m02\id_rsa Username:docker}
	I0624 04:27:04.325713    7764 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0619239s)
	W0624 04:27:04.325787    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:27:04.339117    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:27:04.410631    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
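Disabling the conflicting bridge/podman CNI configs is just a rename to a `.mk_disabled` suffix so the container runtime's CNI plugin no longer loads them. A Go sketch of the same sweep over /etc/cni/net.d (directory path and suffix are taken from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs out of the way,
// mirroring the `find ... -exec mv {} {}.mk_disabled` command above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}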
	I0624 04:27:04.410771    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:27:04.410771    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1572701s)
	I0624 04:27:04.410943    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:27:04.456796    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:27:04.488782    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:27:04.511209    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:27:04.525207    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:27:04.558383    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:27:04.594351    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:27:04.632156    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:27:04.668644    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:27:04.700592    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:27:04.731598    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:27:04.764902    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
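The sed calls above rewrite /etc/containerd/config.toml in place: pin the pause image, force `SystemdCgroup = false` (the cgroupfs driver chosen here), switch any v1/legacy runtimes to `io.containerd.runc.v2`, point `conf_dir` at /etc/cni/net.d and re-enable unprivileged ports. A Go sketch of the same line-oriented rewriting for two of those settings; it writes to a copy instead of using sudo and sed, so it is an illustration rather than the provisioner's mechanism:

package main

import (
	"fmt"
	"os"
	"regexp"
)

var (
	// These patterns mirror two of the sed expressions in the log above.
	sandboxRe = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
	cgroupRe  = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
)

func rewriteContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := sandboxRe.ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
	out = cgroupRe.ReplaceAll(out, []byte(`${1}SystemdCgroup = false`))
	return os.WriteFile(path+".rewritten", out, 0o644) // write a copy for inspection
}

func main() {
	if err := rewriteContainerdConfig("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}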
	I0624 04:27:04.797247    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:27:04.829805    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:27:04.865930    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:05.068575    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:27:05.101121    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:27:05.115315    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:27:05.153433    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:27:05.186606    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:27:05.229598    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:27:05.265598    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:27:05.304154    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:27:05.362709    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:27:05.384787    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:27:05.427339    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:27:05.444634    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:27:05.460900    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:27:05.500871    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:27:05.685702    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:27:05.875302    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:27:05.875472    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
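The 130-byte /etc/docker/daemon.json pushed here is not printed in the log; for the "cgroupfs" cgroup driver it typically carries an `exec-opts` entry like the one below. A hedged sketch that just marshals a plausible payload (the exact fields minikube writes are an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Plausible daemon.json content for the cgroupfs driver; field values are illustrative.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // scp'd to /etc/docker/daemon.json, followed by a docker restart
}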
	I0624 04:27:05.922983    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:06.124475    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:27:08.649108    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.524624s)
	I0624 04:27:08.662442    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:27:08.698813    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:27:08.734411    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:27:08.926039    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:27:09.149621    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:09.357548    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:27:09.401745    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:27:09.440775    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:09.652315    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 04:27:09.760985    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:27:09.773475    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:27:09.783373    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:27:09.798224    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:27:09.815600    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:27:09.874275    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 04:27:09.885275    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:27:09.928991    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:27:09.966535    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:27:09.969141    7764 out.go:177]   - env NO_PROXY=172.31.219.170
	I0624 04:27:09.971901    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:27:09.976407    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:27:09.980875    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:27:09.980875    7764 ip.go:210] interface addr: 172.31.208.1/20
	I0624 04:27:09.993751    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:27:09.998900    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
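The bash one-liner above drops any stale `host.minikube.internal` entry from /etc/hosts and appends the host-side gateway address found on the "vEthernet (Default Switch)" interface. The same upsert written in Go, operating directly on a hosts file (paths and tab-separated format mirror the log; error handling is trimmed):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line for host and appends a fresh "ip<TAB>host" entry.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "172.31.208.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}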
	I0624 04:27:10.021742    7764 mustload.go:65] Loading cluster: ha-340000
	I0624 04:27:10.022138    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:27:10.022138    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:12.145793    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:12.146689    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:12.146689    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:27:12.147545    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.216.99
	I0624 04:27:12.147545    7764 certs.go:194] generating shared ca certs ...
	I0624 04:27:12.147672    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.148287    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:27:12.148567    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:27:12.148856    7764 certs.go:256] generating profile certs ...
	I0624 04:27:12.149438    7764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:27:12.149513    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773
	I0624 04:27:12.149734    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.216.99 172.31.223.254]
	I0624 04:27:12.535121    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 ...
	I0624 04:27:12.535121    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773: {Name:mk8a3e94f1cd57107053c19999e9ccd02984f9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.537249    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773 ...
	I0624 04:27:12.537249    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773: {Name:mk3e4f8cf08b142d4c6b8b2e4d0c2e9e09cde3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:27:12.538494    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.04d02773 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:27:12.548871    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.04d02773 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:27:12.549729    7764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
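The apiserver certificate generated above lists the in-cluster service VIP (10.96.0.1), localhost, both control-plane node IPs and the kube-vip address (172.31.223.254) as IP SANs, so the same cert is valid no matter which endpoint a client dials. A compact Go sketch of issuing such a cert from a CA with crypto/x509; the throwaway CA, key size and validity here are simplifications, not what minikube actually stores under .minikube:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueAPIServerCert signs a server certificate with the given CA, adding the IP SANs from the log above.
func issueAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.31.219.170"), net.ParseIP("172.31.216.99"), net.ParseIP("172.31.223.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// For the sketch, generate a throwaway CA instead of loading .minikube\ca.{crt,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	certPEM, keyPEM, err := issueAPIServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("apiserver.crt", certPEM, 0o644)
	_ = os.WriteFile("apiserver.key", keyPEM, 0o600)
}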
	I0624 04:27:12.549729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:27:12.549729    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:27:12.550813    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:27:12.550846    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:27:12.551173    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:27:12.551173    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:27:12.551525    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:27:12.551851    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:27:12.552013    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:27:12.552013    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:27:12.552625    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:27:12.552788    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:27:12.553076    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:27:12.553339    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:27:12.553609    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:27:12.553609    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:27:12.554183    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:27:12.554343    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:12.554488    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:14.719497    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:14.719833    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:14.719833    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:17.350977    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:27:17.351051    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:17.351051    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:27:17.461926    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0624 04:27:17.470044    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0624 04:27:17.501978    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0624 04:27:17.511100    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0624 04:27:17.543488    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0624 04:27:17.550954    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0624 04:27:17.583396    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0624 04:27:17.590351    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0624 04:27:17.627495    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0624 04:27:17.633981    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0624 04:27:17.667148    7764 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0624 04:27:17.673386    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0624 04:27:17.693848    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:27:17.745421    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:27:17.791072    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:27:17.839792    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:27:17.886067    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0624 04:27:17.938597    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:27:17.985439    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:27:18.029804    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:27:18.082647    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:27:18.126440    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:27:18.175997    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:27:18.225216    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0624 04:27:18.257530    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0624 04:27:18.290661    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0624 04:27:18.321759    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0624 04:27:18.355194    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0624 04:27:18.389446    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0624 04:27:18.423705    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0624 04:27:18.466299    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:27:18.488719    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:27:18.524815    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.532250    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.546205    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:27:18.570264    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:27:18.603714    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:27:18.634492    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.641875    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.655114    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:27:18.680930    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 04:27:18.712285    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:27:18.746203    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:27:18.753921    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:27:18.767115    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:27:18.790163    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
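Each CA file is hashed with `openssl x509 -hash -noout` and linked as /etc/ssl/certs/<hash>.0, which is how OpenSSL-based clients inside the guest locate a trusted cert by subject hash. A Go sketch of the same two steps, shelling out to openssl just as the commands above do (assumes openssl is on PATH and the process can write to the certs directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates <certsDir>/<hash>.0.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}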
	I0624 04:27:18.824124    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:27:18.830338    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:27:18.830338    7764 kubeadm.go:928] updating node {m02 172.31.216.99 8443 v1.30.2 docker true true} ...
	I0624 04:27:18.830338    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 04:27:18.830861    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:27:18.844401    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:27:18.871458    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:27:18.872308    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0624 04:27:18.885304    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:27:18.906359    7764 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0624 04:27:18.917667    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0624 04:27:18.940653    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl
	I0624 04:27:18.940770    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet
	I0624 04:27:18.940770    7764 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm
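Each binary is fetched from dl.k8s.io together with its published .sha256 file and only kept in the local cache once the digest matches. A stripped-down Go sketch of that verify-after-download step (the real downloader streams to disk, shows progress and retries; this one keeps everything in memory for brevity):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	_ = os.WriteFile("kubectl", bin, 0o755) // cached under .minikube\cache in the real flow
}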
	I0624 04:27:20.019379    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:27:20.031677    7764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:27:20.040388    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 04:27:20.040619    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0624 04:27:22.295204    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:27:22.311229    7764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:27:22.319343    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 04:27:22.319532    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0624 04:27:24.787183    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:27:24.813058    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:27:24.827112    7764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:27:24.836021    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 04:27:24.836326    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0624 04:27:25.350318    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0624 04:27:25.369869    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0624 04:27:25.404681    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:27:25.434302    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0624 04:27:25.475142    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:27:25.481186    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:27:25.513681    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:27:25.719380    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:27:25.749842    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:27:25.749842    7764 start.go:316] joinCluster: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:27:25.749842    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0624 04:27:25.750824    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:27:27.897413    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:27:27.898468    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:27.898580    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:27:30.474274    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:27:30.475230    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:27:30.475465    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:27:30.691544    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9407016s)
	I0624 04:27:30.691544    7764 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:27:30.691544    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jly6bg.uk30wjiudedznfhh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m02 --control-plane --apiserver-advertise-address=172.31.216.99 --apiserver-bind-port=8443"
	I0624 04:28:10.348302    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jly6bg.uk30wjiudedznfhh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m02 --control-plane --apiserver-advertise-address=172.31.216.99 --apiserver-bind-port=8443": (39.6566068s)
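The join command's --discovery-token-ca-cert-hash pins the cluster CA for the joining node: it is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A short Go sketch that recomputes the pin from a ca.crt so it can be checked against the value printed in the command above (the cert path is the one used on the control plane; verify against kubeadm's documentation before relying on this):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's pin is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}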
	I0624 04:28:10.348302    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0624 04:28:11.123441    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000-m02 minikube.k8s.io/updated_at=2024_06_24T04_28_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=false
	I0624 04:28:11.296357    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-340000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0624 04:28:11.477881    7764 start.go:318] duration metric: took 45.7278657s to joinCluster
	I0624 04:28:11.479101    7764 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:28:11.479942    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:28:11.487444    7764 out.go:177] * Verifying Kubernetes components...
	I0624 04:28:11.505070    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:28:11.857429    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:28:11.909471    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:28:11.910146    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0624 04:28:11.910146    7764 kubeadm.go:477] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.219.170:8443
	I0624 04:28:11.911347    7764 node_ready.go:35] waiting up to 6m0s for node "ha-340000-m02" to be "Ready" ...
	I0624 04:28:11.911438    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:11.911588    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:11.911588    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:11.911588    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:11.927463    7764 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 04:28:12.413201    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:12.413550    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:12.413550    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:12.413626    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:12.444372    7764 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0624 04:28:12.925886    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:12.925983    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:12.925983    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:12.925983    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:12.932451    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:13.417147    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:13.417227    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:13.417227    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:13.417227    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:13.423642    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:13.926152    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:13.926272    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:13.926272    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:13.926272    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:13.931245    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:13.932638    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
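The polling loop above re-fetches /api/v1/nodes/ha-340000-m02 roughly every half second and inspects the Ready condition until it flips to True or the 6-minute budget runs out. A bare-bones Go version of that readiness check against the apiserver, shown without the client-certificate auth the real kubeconfig-based client carries (InsecureSkipVerify and the missing credentials make this a sketch only; an unauthenticated request would normally be rejected):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the node object and reports whether its Ready condition is "True".
func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	url := "https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, url); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}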
	I0624 04:28:14.425494    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:14.425494    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:14.425494    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:14.425494    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:14.431769    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:14.916726    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:14.916779    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:14.916779    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:14.916779    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:14.921933    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:15.414395    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:15.414437    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:15.414437    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:15.414437    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:15.424101    7764 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 04:28:15.921512    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:15.921512    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:15.921512    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:15.921512    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:15.925599    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:16.427098    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:16.427098    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:16.427098    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:16.427098    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:16.432021    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:16.433661    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:16.920270    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:16.920303    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:16.920303    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:16.920303    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:16.924782    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:17.414532    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:17.414701    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:17.414701    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:17.414763    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:17.419689    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:17.920789    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:17.920789    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:17.920789    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:17.920789    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:17.925280    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:18.414334    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:18.414334    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:18.414717    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:18.414717    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:18.422941    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:28:18.921289    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:18.921289    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:18.921289    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:18.921289    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.042617    7764 round_trippers.go:574] Response Status: 200 OK in 121 milliseconds
	I0624 04:28:19.043561    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:19.415270    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:19.415270    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:19.415572    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:19.415572    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.423845    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:28:19.918838    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:19.918911    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:19.918911    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:19.918911    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:19.924705    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:20.422302    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:20.422302    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:20.422302    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:20.422302    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:20.439795    7764 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0624 04:28:20.912535    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:20.912750    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:20.912750    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:20.912750    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:20.917252    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:21.412416    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:21.412416    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:21.412517    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:21.412517    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:21.417804    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:21.419265    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:21.927253    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:21.927474    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:21.927474    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:21.927474    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:21.931529    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:22.425328    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:22.425533    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:22.425533    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:22.425632    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:22.430375    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:22.924882    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:22.924882    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:22.924882    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:22.924882    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:22.929881    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:23.427060    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:23.427060    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:23.427060    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:23.427060    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:23.432229    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:23.434031    7764 node_ready.go:53] node "ha-340000-m02" has status "Ready":"False"
	I0624 04:28:23.925304    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:23.925304    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:23.925304    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:23.925304    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:23.928986    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.425319    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.425703    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.425703    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.425703    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.430231    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.926082    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.926082    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.926082    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.926082    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.930652    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.931873    7764 node_ready.go:49] node "ha-340000-m02" has status "Ready":"True"
	I0624 04:28:24.931957    7764 node_ready.go:38] duration metric: took 13.0205605s for node "ha-340000-m02" to be "Ready" ...
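The node_ready lines above record the same pattern over and over: fetch the node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of that polling loop, for orientation only (this is an illustration, not minikube's node_ready.go; the kubeconfig path and timeout are assumptions, the node name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-340000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}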
	I0624 04:28:24.932080    7764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:28:24.932215    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:24.932327    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.932327    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.932327    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.939750    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:24.948492    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.948492    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6xxtk
	I0624 04:28:24.949022    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.949022    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.949022    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.952872    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.954581    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.954581    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.954581    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.954581    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.959287    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.959977    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.959977    7764 pod_ready.go:81] duration metric: took 11.4848ms for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.959977    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.959977    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6zh6m
	I0624 04:28:24.959977    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.959977    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.959977    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.964602    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.965299    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.965299    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.965299    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.965502    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.969747    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.971353    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.971456    7764 pod_ready.go:81] duration metric: took 11.4793ms for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.971456    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.971641    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000
	I0624 04:28:24.971778    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.971801    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.971801    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.975248    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.975850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:24.975850    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.975850    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.975850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.980638    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:24.981301    7764 pod_ready.go:92] pod "etcd-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.981301    7764 pod_ready.go:81] duration metric: took 9.8443ms for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.981301    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.981301    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m02
	I0624 04:28:24.981301    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.981301    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.981301    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.984815    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:28:24.985812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:24.985812    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:24.985812    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:24.985812    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:24.990897    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:24.990897    7764 pod_ready.go:92] pod "etcd-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:24.990897    7764 pod_ready.go:81] duration metric: took 9.5963ms for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:24.990897    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.129269    7764 request.go:629] Waited for 138.2823ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:28:25.129345    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:28:25.129345    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.129345    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.129345    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.134197    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:25.332822    7764 request.go:629] Waited for 196.6808ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:25.332822    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:25.332822    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.333047    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.333047    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.339665    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:28:25.340354    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:25.340354    7764 pod_ready.go:81] duration metric: took 349.4557ms for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.340354    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:25.535873    7764 request.go:629] Waited for 195.3415ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.536188    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.536188    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.536188    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.536188    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.540667    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:25.740878    7764 request.go:629] Waited for 198.6579ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:25.741071    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:25.741071    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.741071    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.741071    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.755930    7764 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 04:28:25.929219    7764 request.go:629] Waited for 78.8704ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.929219    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:25.929219    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:25.929219    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:25.929495    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:25.945459    7764 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 04:28:26.134444    7764 request.go:629] Waited for 187.8089ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.134510    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.134571    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.134571    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.134645    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.140071    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.353705    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:28:26.353705    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.353705    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.353705    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.359521    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.526359    7764 request.go:629] Waited for 165.4633ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.526448    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:26.526448    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.526543    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.526543    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.532021    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.532527    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:26.532527    7764 pod_ready.go:81] duration metric: took 1.1921684s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.532527    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.732704    7764 request.go:629] Waited for 200.1103ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:28:26.732812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:28:26.732812    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.732925    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.732925    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.738833    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:26.936664    7764 request.go:629] Waited for 197.0492ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:26.936664    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:26.936664    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:26.936664    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:26.936664    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:26.941663    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:26.942676    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:26.942676    7764 pod_ready.go:81] duration metric: took 410.1475ms for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:26.942676    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.127937    7764 request.go:629] Waited for 184.7747ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:28:27.128099    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:28:27.128099    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.128099    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.128099    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.132699    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:27.331014    7764 request.go:629] Waited for 196.3222ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.331322    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.331322    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.331410    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.331425    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.336819    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:27.337011    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:27.337011    7764 pod_ready.go:81] duration metric: took 394.3333ms for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.337011    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.535919    7764 request.go:629] Waited for 198.3729ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:28:27.536120    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:28:27.536195    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.536195    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.536195    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.541904    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:27.739023    7764 request.go:629] Waited for 195.8991ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.739136    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:27.739136    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.739136    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.739355    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.746854    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:27.748781    7764 pod_ready.go:92] pod "kube-proxy-87bnm" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:27.748849    7764 pod_ready.go:81] duration metric: took 411.837ms for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.748849    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:27.927519    7764 request.go:629] Waited for 178.5983ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:28:27.927980    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:28:27.927980    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:27.928033    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:27.928033    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:27.933267    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.132952    7764 request.go:629] Waited for 198.3849ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.133121    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.133121    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.133121    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.133121    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.137793    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:28.140164    7764 pod_ready.go:92] pod "kube-proxy-jktx8" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.140164    7764 pod_ready.go:81] duration metric: took 391.3129ms for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.140249    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.335196    7764 request.go:629] Waited for 194.6957ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:28:28.335344    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:28:28.335344    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.335459    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.335459    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.340533    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.536425    7764 request.go:629] Waited for 193.8862ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.536783    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:28:28.536783    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.536783    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.536783    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.541533    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:28.543580    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.543580    7764 pod_ready.go:81] duration metric: took 403.3296ms for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.543580    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.738335    7764 request.go:629] Waited for 194.4595ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:28:28.738431    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:28:28.738431    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.738506    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.738506    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.748022    7764 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 04:28:28.941842    7764 request.go:629] Waited for 193.3654ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:28.941928    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:28:28.941928    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:28.941928    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:28.941928    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:28.947294    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:28.948720    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:28:28.948834    7764 pod_ready.go:81] duration metric: took 405.2524ms for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:28:28.948834    7764 pod_ready.go:38] duration metric: took 4.0166767s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
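Many of the pod-readiness requests above were delayed by client-go's default client-side rate limiter, which is what the "Waited for ... due to client-side throttling, not priority and fairness" messages report. A small sketch of where that limiter is configured on a rest.Config (illustrative values only; these are not the settings minikube uses):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10; bursts of GETs beyond that are
	// delayed on the client side, producing the throttling messages in the log.
	cfg.QPS = 50    // assumed illustrative value
	cfg.Burst = 100 // assumed illustrative value
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client built with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, cs)
}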
	I0624 04:28:28.948834    7764 api_server.go:52] waiting for apiserver process to appear ...
	I0624 04:28:28.962227    7764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:28:28.990048    7764 api_server.go:72] duration metric: took 17.5108258s to wait for apiserver process to appear ...
	I0624 04:28:28.990169    7764 api_server.go:88] waiting for apiserver healthz status ...
	I0624 04:28:28.990250    7764 api_server.go:253] Checking apiserver healthz at https://172.31.219.170:8443/healthz ...
	I0624 04:28:29.001041    7764 api_server.go:279] https://172.31.219.170:8443/healthz returned 200:
	ok
	I0624 04:28:29.001264    7764 round_trippers.go:463] GET https://172.31.219.170:8443/version
	I0624 04:28:29.001304    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.001347    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.001347    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.001974    7764 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 04:28:29.003171    7764 api_server.go:141] control plane version: v1.30.2
	I0624 04:28:29.003281    7764 api_server.go:131] duration metric: took 13.0017ms to wait for apiserver health ...
	I0624 04:28:29.003281    7764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 04:28:29.130016    7764 request.go:629] Waited for 126.6316ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.130380    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.130380    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.130485    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.130485    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.137976    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:28:29.145789    7764 system_pods.go:59] 17 kube-system pods found
	I0624 04:28:29.145789    7764 system_pods.go:61] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:28:29.145789    7764 system_pods.go:61] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:28:29.145789    7764 system_pods.go:74] duration metric: took 142.5067ms to wait for pod list to return data ...
	I0624 04:28:29.145789    7764 default_sa.go:34] waiting for default service account to be created ...
	I0624 04:28:29.332967    7764 request.go:629] Waited for 187.1779ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:28:29.333090    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:28:29.333090    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.333090    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.333090    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.338765    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:28:29.339489    7764 default_sa.go:45] found service account: "default"
	I0624 04:28:29.339489    7764 default_sa.go:55] duration metric: took 193.6992ms for default service account to be created ...
	I0624 04:28:29.339489    7764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 04:28:29.535869    7764 request.go:629] Waited for 196.3794ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.536019    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:28:29.536019    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.536019    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.536094    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.547818    7764 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0624 04:28:29.560320    7764 system_pods.go:86] 17 kube-system pods found
	I0624 04:28:29.560320    7764 system_pods.go:89] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:28:29.560320    7764 system_pods.go:89] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:28:29.560320    7764 system_pods.go:89] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:28:29.560881    7764 system_pods.go:89] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:28:29.560944    7764 system_pods.go:89] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:28:29.560987    7764 system_pods.go:89] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:28:29.560987    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:28:29.561047    7764 system_pods.go:89] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:28:29.561901    7764 system_pods.go:89] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:28:29.561901    7764 system_pods.go:126] duration metric: took 222.4119ms to wait for k8s-apps to be running ...
	I0624 04:28:29.562055    7764 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 04:28:29.574442    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:28:29.606095    7764 system_svc.go:56] duration metric: took 44.0393ms WaitForService to wait for kubelet
	I0624 04:28:29.606198    7764 kubeadm.go:576] duration metric: took 18.1269732s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:28:29.606270    7764 node_conditions.go:102] verifying NodePressure condition ...
	I0624 04:28:29.740727    7764 request.go:629] Waited for 134.2019ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes
	I0624 04:28:29.740855    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes
	I0624 04:28:29.740855    7764 round_trippers.go:469] Request Headers:
	I0624 04:28:29.740855    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:28:29.740855    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:28:29.745626    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:28:29.747200    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:28:29.747200    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:28:29.747200    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:28:29.747200    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:28:29.747200    7764 node_conditions.go:105] duration metric: took 140.9292ms to run NodePressure ...
	I0624 04:28:29.747200    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:28:29.747200    7764 start.go:254] writing updated cluster config ...
	I0624 04:28:29.751934    7764 out.go:177] 
	I0624 04:28:29.766046    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:28:29.766832    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:28:29.774794    7764 out.go:177] * Starting "ha-340000-m03" control-plane node in "ha-340000" cluster
	I0624 04:28:29.777301    7764 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 04:28:29.777301    7764 cache.go:56] Caching tarball of preloaded images
	I0624 04:28:29.777301    7764 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 04:28:29.777301    7764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 04:28:29.777301    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:28:29.781158    7764 start.go:360] acquireMachinesLock for ha-340000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 04:28:29.781158    7764 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-340000-m03"
	I0624 04:28:29.781158    7764 start.go:93] Provisioning new machine with config: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:28:29.781158    7764 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0624 04:28:29.784224    7764 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 04:28:29.785202    7764 start.go:159] libmachine.API.Create for "ha-340000" (driver="hyperv")
	I0624 04:28:29.785202    7764 client.go:168] LocalClient.Create starting
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:28:29.785202    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Decoding PEM data...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: Parsing certificate...
	I0624 04:28:29.786208    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 04:28:31.780286    7764 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 04:28:31.780368    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:31.780402    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 04:28:33.569119    7764 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 04:28:33.569119    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:33.569266    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:28:35.138412    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:28:35.138412    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:35.139253    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:28:38.939051    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:28:38.939109    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:38.941295    7764 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 04:28:39.408348    7764 main.go:141] libmachine: Creating SSH key...
	I0624 04:28:39.639687    7764 main.go:141] libmachine: Creating VM...
	I0624 04:28:39.639687    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 04:28:42.623268    7764 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 04:28:42.623268    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:42.623268    7764 main.go:141] libmachine: Using switch "Default Switch"
	I0624 04:28:42.623422    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 04:28:44.395580    7764 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 04:28:44.395580    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:44.395696    7764 main.go:141] libmachine: Creating VHD
	I0624 04:28:44.395696    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 04:28:48.321070    7764 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B83AC5C6-1D67-49AC-95A5-608E946249BA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 04:28:48.321070    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:48.321070    7764 main.go:141] libmachine: Writing magic tar header
	I0624 04:28:48.321070    7764 main.go:141] libmachine: Writing SSH key tar header
	I0624 04:28:48.333118    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 04:28:51.596942    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:28:51.596942    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:51.597759    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd' -SizeBytes 20000MB
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:54.226005    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-340000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:28:58.041441    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-340000-m03 -DynamicMemoryEnabled $false
	I0624 04:29:00.368389    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:00.368389    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:00.368481    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-340000-m03 -Count 2
	I0624 04:29:02.606386    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:02.607068    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:02.607068    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\boot2docker.iso'
	I0624 04:29:05.261886    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:05.261954    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:05.262027    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-340000-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\disk.vhd'
	I0624 04:29:08.070160    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:08.070350    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:08.070350    7764 main.go:141] libmachine: Starting VM...
	I0624 04:29:08.070443    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-340000-m03
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:11.195279    7764 main.go:141] libmachine: Waiting for host to start...
	I0624 04:29:11.195279    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:13.598113    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:13.598113    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:13.598340    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:16.203468    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:16.203544    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:17.209068    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:19.524986    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:19.524986    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:19.525195    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:22.187835    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:22.188843    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:23.198023    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:25.481002    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:25.481290    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:25.481290    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:28.104506    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:28.104506    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:29.106735    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:31.394356    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:31.394356    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:31.394743    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:33.998177    7764 main.go:141] libmachine: [stdout =====>] : 
	I0624 04:29:33.998177    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:35.007839    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:37.326080    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:37.326122    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:37.326219    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:40.003971    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:40.003971    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:40.004066    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:42.217863    7764 machine.go:94] provisionDockerMachine start ...
	I0624 04:29:42.217863    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:44.517721    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:44.518228    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:44.518527    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:47.201560    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:47.201560    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:47.207577    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:47.218297    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:47.218297    7764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 04:29:47.360483    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 04:29:47.360551    7764 buildroot.go:166] provisioning hostname "ha-340000-m03"
	I0624 04:29:47.360618    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:49.565103    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:52.193887    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:52.193887    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:52.201303    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:52.201303    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:52.201927    7764 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-340000-m03 && echo "ha-340000-m03" | sudo tee /etc/hostname
	I0624 04:29:52.364604    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-340000-m03
	
	I0624 04:29:52.365299    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:54.554520    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:29:57.182693    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:29:57.182693    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:57.189128    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:29:57.189128    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:29:57.189653    7764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-340000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-340000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-340000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 04:29:57.332038    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 04:29:57.332182    7764 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 04:29:57.332182    7764 buildroot.go:174] setting up certificates
	I0624 04:29:57.332182    7764 provision.go:84] configureAuth start
	I0624 04:29:57.332331    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:29:59.502324    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:29:59.503315    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:29:59.503378    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:02.186897    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:02.186897    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:02.187015    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:04.398633    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:07.011118    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:07.011118    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:07.011118    7764 provision.go:143] copyHostCerts
	I0624 04:30:07.011118    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 04:30:07.011654    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 04:30:07.011654    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 04:30:07.011920    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 04:30:07.013185    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 04:30:07.013934    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 04:30:07.013934    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 04:30:07.014668    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 04:30:07.015554    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 04:30:07.016090    7764 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 04:30:07.016090    7764 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 04:30:07.016393    7764 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 04:30:07.017875    7764 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-340000-m03 san=[127.0.0.1 172.31.215.46 ha-340000-m03 localhost minikube]
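provision.go is generating a server certificate for the new node, signed by the shared minikube CA and carrying the DNS and IP SANs listed above. A rough sketch of that kind of issuance with Go's crypto/x509 (file names, the RSA key size, the one-year validity and the abbreviated error handling are all assumptions for illustration, not minikube's actual provisioner):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Load the CA certificate and its (PKCS#1 RSA) private key from PEM files.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Template carrying the SANs from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-340000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-340000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.31.215.46")},
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}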
	I0624 04:30:07.220794    7764 provision.go:177] copyRemoteCerts
	I0624 04:30:07.235765    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 04:30:07.235765    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:09.413047    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:09.413047    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:09.413954    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:12.074745    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:12.075228    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:12.075332    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:12.182661    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9468776s)
	I0624 04:30:12.182661    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 04:30:12.183193    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 04:30:12.231784    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 04:30:12.231784    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 04:30:12.280832    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 04:30:12.281548    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0624 04:30:12.334118    7764 provision.go:87] duration metric: took 15.001877s to configureAuth
	I0624 04:30:12.334118    7764 buildroot.go:189] setting minikube options for container-runtime
	I0624 04:30:12.334804    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:30:12.334915    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:14.531074    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:14.532040    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:14.532156    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:17.153618    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:17.154602    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:17.160547    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:17.161272    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:17.161272    7764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 04:30:17.301973    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 04:30:17.301973    7764 buildroot.go:70] root file system type: tmpfs
	I0624 04:30:17.301973    7764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 04:30:17.301973    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:19.538328    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:22.189547    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:22.190217    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:22.196085    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:22.196931    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:22.196931    7764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.219.170"
	Environment="NO_PROXY=172.31.219.170,172.31.216.99"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 04:30:22.355754    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.219.170
	Environment=NO_PROXY=172.31.219.170,172.31.216.99
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 04:30:22.355852    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:24.538476    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:24.538476    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:24.538564    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:27.211904    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:27.211904    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:27.219219    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:27.219430    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:27.219430    7764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 04:30:29.424476    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 04:30:29.424476    7764 machine.go:97] duration metric: took 47.206429s to provisionDockerMachine
	I0624 04:30:29.424476    7764 client.go:171] duration metric: took 1m59.6388071s to LocalClient.Create
	I0624 04:30:29.424476    7764 start.go:167] duration metric: took 1m59.6388071s to libmachine.API.Create "ha-340000"
	I0624 04:30:29.424476    7764 start.go:293] postStartSetup for "ha-340000-m03" (driver="hyperv")
	I0624 04:30:29.424476    7764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 04:30:29.436940    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 04:30:29.436940    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:31.668008    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:31.668008    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:31.668381    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:34.298028    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:34.298028    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:34.298967    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:34.413175    7764 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9762153s)
	I0624 04:30:34.426626    7764 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 04:30:34.433763    7764 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 04:30:34.433763    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 04:30:34.434369    7764 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 04:30:34.435408    7764 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 04:30:34.435408    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 04:30:34.447296    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 04:30:34.473393    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 04:30:34.531482    7764 start.go:296] duration metric: took 5.106986s for postStartSetup
	I0624 04:30:34.534738    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:36.714018    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:36.714235    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:36.714235    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:39.323294    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:39.323294    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:39.324343    7764 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\config.json ...
	I0624 04:30:39.326579    7764 start.go:128] duration metric: took 2m9.5449154s to createHost
	I0624 04:30:39.326579    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:41.514851    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:41.515375    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:41.515375    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:44.135098    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:44.135098    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:44.141184    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:44.141663    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:44.141731    7764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 04:30:44.276871    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719228644.278623547
	
	I0624 04:30:44.276984    7764 fix.go:216] guest clock: 1719228644.278623547
	I0624 04:30:44.276984    7764 fix.go:229] Guest: 2024-06-24 04:30:44.278623547 -0700 PDT Remote: 2024-06-24 04:30:39.3265792 -0700 PDT m=+574.988835701 (delta=4.952044347s)
	I0624 04:30:44.277077    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:46.464541    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:46.464907    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:46.465156    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:49.078161    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:49.078926    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:49.085962    7764 main.go:141] libmachine: Using SSH client type: native
	I0624 04:30:49.085962    7764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.215.46 22 <nil> <nil>}
	I0624 04:30:49.085962    7764 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719228644
	I0624 04:30:49.236292    7764 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 11:30:44 UTC 2024
	
	I0624 04:30:49.236352    7764 fix.go:236] clock set: Mon Jun 24 11:30:44 UTC 2024
	 (err=<nil>)
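fix.go reads the guest clock over SSH (the date +%s.%N above), compares it with the host clock, finds roughly five seconds of drift, and resets the guest with date -s. A small sketch of that check in Go (the runSSH type, the fake runner in main and the skew threshold are assumptions for illustration, not minikube's fix.go):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// runSSH stands in for an SSH command runner on the guest.
type runSSH func(cmd string) (string, error)

// syncGuestClock reads the guest clock and, if it drifts from the host clock by
// more than maxSkew, sets the guest to the host's current epoch second.
func syncGuestClock(run runSSH, maxSkew time.Duration) error {
	out, err := run("date +%s.%N")
	if err != nil {
		return err
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := guest - float64(time.Now().UnixNano())/1e9
	if math.Abs(drift) < maxSkew.Seconds() {
		return nil
	}
	_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	return err
}

func main() {
	// Fake runner that pretends the guest clock is five seconds ahead.
	fake := func(cmd string) (string, error) {
		if strings.HasPrefix(cmd, "date +") {
			return fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9+5), nil
		}
		fmt.Println("would run:", cmd)
		return "", nil
	}
	if err := syncGuestClock(fake, time.Second); err != nil {
		fmt.Println("clock sync failed:", err)
	}
}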
	I0624 04:30:49.236352    7764 start.go:83] releasing machines lock for "ha-340000-m03", held for 2m19.4546498s
	I0624 04:30:49.236603    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:51.396838    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:51.397038    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:51.397327    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:54.042997    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:54.042997    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:54.045907    7764 out.go:177] * Found network options:
	I0624 04:30:54.048782    7764 out.go:177]   - NO_PROXY=172.31.219.170,172.31.216.99
	W0624 04:30:54.050916    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.050916    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:30:54.053276    7764 out.go:177]   - NO_PROXY=172.31.219.170,172.31.216.99
	W0624 04:30:54.054917    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.054917    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.056874    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 04:30:54.056874    7764 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 04:30:54.058948    7764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 04:30:54.058948    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:54.070138    7764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 04:30:54.070138    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000-m03 ).state
	I0624 04:30:56.323684    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:56.323684    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:56.324154    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000-m03 ).networkadapters[0]).ipaddresses[0]
	I0624 04:30:59.018407    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:59.018496    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:59.018496    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:59.043926    7764 main.go:141] libmachine: [stdout =====>] : 172.31.215.46
	
	I0624 04:30:59.043926    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:30:59.044183    7764 sshutil.go:53] new ssh client: &{IP:172.31.215.46 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000-m03\id_rsa Username:docker}
	I0624 04:30:59.123817    7764 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0536586s)
	W0624 04:30:59.123913    7764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 04:30:59.137150    7764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 04:30:59.198368    7764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 04:30:59.198368    7764 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1394001s)
	I0624 04:30:59.198368    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:30:59.199091    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:30:59.249441    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 04:30:59.279412    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 04:30:59.298354    7764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 04:30:59.311075    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 04:30:59.345601    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:30:59.380032    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 04:30:59.416575    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 04:30:59.448533    7764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 04:30:59.482706    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 04:30:59.516475    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 04:30:59.548066    7764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 04:30:59.582199    7764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 04:30:59.612398    7764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 04:30:59.641567    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:30:59.839919    7764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 04:30:59.874996    7764 start.go:494] detecting cgroup driver to use...
	I0624 04:30:59.888286    7764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 04:30:59.932500    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:30:59.967424    7764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 04:31:00.020664    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 04:31:00.062903    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:31:00.103099    7764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 04:31:00.164946    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 04:31:00.190709    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 04:31:00.240234    7764 ssh_runner.go:195] Run: which cri-dockerd
	I0624 04:31:00.257983    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 04:31:00.276180    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 04:31:00.322485    7764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 04:31:00.542359    7764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 04:31:00.734150    7764 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 04:31:00.734370    7764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 04:31:00.779653    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:00.981710    7764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 04:31:03.515854    7764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5341346s)
	I0624 04:31:03.527767    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 04:31:03.563828    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:31:03.599962    7764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 04:31:03.796479    7764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 04:31:04.004600    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:04.212703    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 04:31:04.257264    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 04:31:04.297196    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:04.515786    7764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 04:31:04.623019    7764 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 04:31:04.637062    7764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 04:31:04.645350    7764 start.go:562] Will wait 60s for crictl version
	I0624 04:31:04.662781    7764 ssh_runner.go:195] Run: which crictl
	I0624 04:31:04.680982    7764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 04:31:04.736333    7764 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 04:31:04.747544    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:31:04.792538    7764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 04:31:04.827774    7764 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 04:31:04.830135    7764 out.go:177]   - env NO_PROXY=172.31.219.170
	I0624 04:31:04.832276    7764 out.go:177]   - env NO_PROXY=172.31.219.170,172.31.216.99
	I0624 04:31:04.835401    7764 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 04:31:04.838352    7764 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 04:31:04.841159    7764 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 04:31:04.841159    7764 ip.go:210] interface addr: 172.31.208.1/20
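ip.go resolves the host-side address of the Hyper-V "Default Switch" by scanning the host's network interfaces for a name-prefix match and reading that interface's addresses (here the fe80:: link-local address and 172.31.208.1/20). Roughly, with Go's net package (a sketch, not the actual ip.go):

package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

// findInterfaceAddrs returns the addresses of the first interface whose name
// starts with the given prefix, mirroring the prefix match logged above.
func findInterfaceAddrs(prefix string) ([]net.Addr, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if strings.HasPrefix(iface.Name, prefix) {
			return iface.Addrs()
		}
	}
	return nil, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	addrs, err := findInterfaceAddrs("vEthernet (Default Switch)")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}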
	I0624 04:31:04.854165    7764 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 04:31:04.861182    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:31:04.883034    7764 mustload.go:65] Loading cluster: ha-340000
	I0624 04:31:04.883693    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:31:04.883975    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:07.051544    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:07.051544    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:07.051544    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:31:07.052940    7764 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000 for IP: 172.31.215.46
	I0624 04:31:07.053001    7764 certs.go:194] generating shared ca certs ...
	I0624 04:31:07.053057    7764 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.053632    7764 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 04:31:07.053952    7764 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 04:31:07.054200    7764 certs.go:256] generating profile certs ...
	I0624 04:31:07.054451    7764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\client.key
	I0624 04:31:07.054451    7764 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28
	I0624 04:31:07.055012    7764 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.219.170 172.31.216.99 172.31.215.46 172.31.223.254]
	I0624 04:31:07.218618    7764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 ...
	I0624 04:31:07.218618    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28: {Name:mk7c1cfb6b5dddd8b7b8e040cea23942dd2d96aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.220588    7764 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28 ...
	I0624 04:31:07.220588    7764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28: {Name:mk345b96410dd305797032f83b6a7a4525eab593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 04:31:07.221577    7764 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt.26900e28 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt
	I0624 04:31:07.233081    7764 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key.26900e28 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key
	I0624 04:31:07.234035    7764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key
	I0624 04:31:07.234035    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 04:31:07.234035    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 04:31:07.234927    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 04:31:07.235144    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 04:31:07.235324    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 04:31:07.235958    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 04:31:07.236186    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 04:31:07.236765    7764 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 04:31:07.236926    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 04:31:07.237694    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 04:31:07.237694    7764 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 04:31:07.238419    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 04:31:07.238548    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:07.238548    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 04:31:07.238548    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:09.443461    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:09.443461    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:09.443721    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:31:12.107653    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:31:12.107653    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:12.107902    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:31:12.224109    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0624 04:31:12.232391    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0624 04:31:12.270365    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0624 04:31:12.279016    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0624 04:31:12.310163    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0624 04:31:12.317495    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0624 04:31:12.353252    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0624 04:31:12.360469    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0624 04:31:12.407372    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0624 04:31:12.414108    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0624 04:31:12.448983    7764 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0624 04:31:12.456131    7764 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0624 04:31:12.476513    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 04:31:12.525761    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 04:31:12.575029    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 04:31:12.621707    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 04:31:12.676117    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0624 04:31:12.732880    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 04:31:12.786222    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 04:31:12.836121    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-340000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0624 04:31:12.890184    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 04:31:12.939793    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 04:31:12.990002    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 04:31:13.037923    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0624 04:31:13.072699    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0624 04:31:13.108149    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0624 04:31:13.145718    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0624 04:31:13.179869    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0624 04:31:13.213206    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0624 04:31:13.246195    7764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0624 04:31:13.293043    7764 ssh_runner.go:195] Run: openssl version
	I0624 04:31:13.317584    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 04:31:13.350901    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.358812    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.371262    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 04:31:13.393846    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 04:31:13.431766    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 04:31:13.466814    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.474411    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.488241    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 04:31:13.510172    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 04:31:13.543032    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 04:31:13.575139    7764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 04:31:13.582643    7764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 04:31:13.594772    7764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 04:31:13.618725    7764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 04:31:13.652125    7764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 04:31:13.660020    7764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 04:31:13.660020    7764 kubeadm.go:928] updating node {m03 172.31.215.46 8443 v1.30.2 docker true true} ...
	I0624 04:31:13.660020    7764 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-340000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.215.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 04:31:13.660596    7764 kube-vip.go:115] generating kube-vip config ...
	I0624 04:31:13.673539    7764 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0624 04:31:13.699732    7764 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0624 04:31:13.699732    7764 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.31.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
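	[annotation] The static-pod manifest above is what kube-vip.go reports after templating in the HA virtual IP (172.31.223.254), the API server port (8443), and the auto-enabled control-plane load-balancing flags. A rough stand-in for that templating step using Go's text/template (field names here are illustrative, not minikube's actual template variables):

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kube-vip env section generated in the log;
// only the VIP address and API server port are parameterised.
const kubeVipEnv = `    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
    - name: lb_port
      value: "{{.Port}}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
	// Values taken from the log: the cluster VIP and the API server port.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "172.31.223.254", Port: 8443})
}
```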
	I0624 04:31:13.712403    7764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 04:31:13.731001    7764 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0624 04:31:13.745744    7764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0624 04:31:13.763746    7764 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0624 04:31:13.763746    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:31:13.763746    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:31:13.778724    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 04:31:13.778724    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:31:13.780731    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 04:31:13.786132    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 04:31:13.786132    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0624 04:31:13.826217    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 04:31:13.826623    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0624 04:31:13.826307    7764 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:31:13.842204    7764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 04:31:13.884702    7764 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 04:31:13.885229    7764 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
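	[annotation] The `?checksum=file:https://dl.k8s.io/.../<binary>.sha256` suffixes above mean each kubeadm/kubectl/kubelet binary is verified against the published SHA-256 before being copied into /var/lib/minikube/binaries. A stdlib-only sketch of that verification step, using the kubeadm URL from the log (error handling abbreviated):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchSHA256 downloads url and returns the hex SHA-256 of the response body.
func fetchSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm"
	got, err := fetchSHA256(base)
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	fmt.Println("checksum ok:", got == strings.TrimSpace(string(want)))
}
```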
	I0624 04:31:15.237705    7764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0624 04:31:15.255599    7764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0624 04:31:15.301892    7764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 04:31:15.337163    7764 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0624 04:31:15.387395    7764 ssh_runner.go:195] Run: grep 172.31.223.254	control-plane.minikube.internal$ /etc/hosts
	I0624 04:31:15.394615    7764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 04:31:15.433768    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:31:15.648263    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:31:15.681406    7764 host.go:66] Checking if "ha-340000" exists ...
	I0624 04:31:15.682456    7764 start.go:316] joinCluster: &{Name:ha-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-340000 Namespace:default APIServerHAVIP:172.31.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.219.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.99 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 04:31:15.682677    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0624 04:31:15.682784    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-340000 ).state
	I0624 04:31:17.926278    7764 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 04:31:17.926422    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:17.926422    7764 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-340000 ).networkadapters[0]).ipaddresses[0]
	I0624 04:31:20.585612    7764 main.go:141] libmachine: [stdout =====>] : 172.31.219.170
	
	I0624 04:31:20.585612    7764 main.go:141] libmachine: [stderr =====>] : 
	I0624 04:31:20.586011    7764 sshutil.go:53] new ssh client: &{IP:172.31.219.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-340000\id_rsa Username:docker}
	I0624 04:31:20.819671    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1368265s)
	I0624 04:31:20.819773    7764 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:31:20.819773    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7l95v3.4djr7oozbpugwz2j --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m03 --control-plane --apiserver-advertise-address=172.31.215.46 --apiserver-bind-port=8443"
	I0624 04:32:07.594871    7764 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7l95v3.4djr7oozbpugwz2j --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-340000-m03 --control-plane --apiserver-advertise-address=172.31.215.46 --apiserver-bind-port=8443": (46.7749132s)
	I0624 04:32:07.594951    7764 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0624 04:32:08.455240    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-340000-m03 minikube.k8s.io/updated_at=2024_06_24T04_32_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=ha-340000 minikube.k8s.io/primary=false
	I0624 04:32:08.660551    7764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-340000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0624 04:32:08.820207    7764 start.go:318] duration metric: took 53.1375751s to joinCluster
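	[annotation] The m03 join above follows the standard kubeadm HA flow: the primary prints a join command via `kubeadm token create --print-join-command`, the new node runs it with `--control-plane` and its advertise address, and minikube then labels the node and removes the `node-role.kubernetes.io/control-plane:NoSchedule` taint. A sketch of driving the post-join steps from Go with os/exec (the test runs these remotely through ssh_runner; node name, kubeconfig path, and the single label below are taken from or simplified against the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, mirroring the
// "Run:" lines in the log (which minikube executes over SSH instead).
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	// 1. On the primary: print a reusable join command for the new control plane.
	joinCmd, err := run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")
	if err != nil {
		panic(err)
	}
	fmt.Println("join with:", joinCmd)

	// 2. After the join completes: label the node and drop the control-plane
	//    taint, as done for ha-340000-m03 in the log.
	_, _ = run(kubectl, kubeconfig, "label", "--overwrite", "nodes", "ha-340000-m03",
		"minikube.k8s.io/primary=false")
	_, _ = run(kubectl, kubeconfig, "taint", "nodes", "ha-340000-m03",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
}
```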
	I0624 04:32:08.820207    7764 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.31.215.46 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 04:32:08.821271    7764 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 04:32:08.823330    7764 out.go:177] * Verifying Kubernetes components...
	I0624 04:32:08.839918    7764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 04:32:09.250611    7764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 04:32:09.295045    7764 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 04:32:09.295461    7764 kapi.go:59] client config for ha-340000: &rest.Config{Host:"https://172.31.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-340000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0624 04:32:09.295461    7764 kubeadm.go:477] Overriding stale ClientConfig host https://172.31.223.254:8443 with https://172.31.219.170:8443
	I0624 04:32:09.296693    7764 node_ready.go:35] waiting up to 6m0s for node "ha-340000-m03" to be "Ready" ...
	I0624 04:32:09.296890    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:09.296890    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:09.296890    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:09.296890    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:09.316809    7764 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0624 04:32:09.807209    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:09.807209    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:09.807325    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:09.807414    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:09.827889    7764 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0624 04:32:10.310982    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:10.310982    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:10.310982    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:10.311277    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:10.315501    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:10.811029    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:10.811029    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:10.811029    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:10.811029    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:10.816499    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:11.303784    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:11.303784    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:11.303784    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:11.303784    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:11.307440    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:11.308932    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:11.802698    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:11.802698    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:11.802698    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:11.802698    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:11.809189    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:32:12.311599    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:12.311599    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:12.311777    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:12.311777    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:12.316594    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:12.803988    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:12.803988    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:12.804304    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:12.804304    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:12.808853    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:13.309582    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:13.309582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:13.309582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:13.309582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:13.362409    7764 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0624 04:32:13.363366    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:13.811415    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:13.811415    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:13.811415    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:13.811632    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:13.818911    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:14.299909    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:14.299974    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:14.299974    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:14.300035    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:14.305244    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:14.798558    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:14.798732    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:14.798732    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:14.798732    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:14.809510    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:15.301559    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:15.301559    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:15.301559    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:15.301559    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:15.304177    7764 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 04:32:15.801642    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:15.801642    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:15.801741    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:15.801741    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:15.807700    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:15.808568    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:16.301725    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:16.301793    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:16.301793    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:16.301793    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:16.306521    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:16.806576    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:16.806576    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:16.806576    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:16.806576    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:16.812265    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:17.307565    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:17.307911    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:17.307911    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:17.307911    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:17.311249    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:17.803195    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:17.803236    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:17.803236    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:17.803236    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:17.829471    7764 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0624 04:32:17.830201    7764 node_ready.go:53] node "ha-340000-m03" has status "Ready":"False"
	I0624 04:32:18.302566    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:18.302660    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:18.302660    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:18.302704    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:18.307633    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:18.806367    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:18.806719    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:18.806719    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:18.806719    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:18.811372    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.306932    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:19.306932    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.306932    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.306932    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.312601    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:19.811196    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:19.811196    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.811196    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.811196    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.824720    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:32:19.826109    7764 node_ready.go:49] node "ha-340000-m03" has status "Ready":"True"
	I0624 04:32:19.826253    7764 node_ready.go:38] duration metric: took 10.5295184s for node "ha-340000-m03" to be "Ready" ...
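	[annotation] The repeated `GET /api/v1/nodes/ha-340000-m03` calls above are minikube polling roughly every 500ms until the node reports its `Ready` condition as `True`. A minimal client-go equivalent of that wait, assuming the kubeconfig path from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-340000-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node")
}
```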
	I0624 04:32:19.826253    7764 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:32:19.826393    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:19.826465    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.826465    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.826513    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.840491    7764 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 04:32:19.852526    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.852526    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6xxtk
	I0624 04:32:19.852526    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.852526    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.853077    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.856980    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.858402    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.858402    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.858402    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.858402    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.862234    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.863468    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.863598    7764 pod_ready.go:81] duration metric: took 11.0726ms for pod "coredns-7db6d8ff4d-6xxtk" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.863598    7764 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.863721    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6zh6m
	I0624 04:32:19.863721    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.863721    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.863721    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.868185    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.869945    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.869945    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.869945    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.869945    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.873539    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.874136    7764 pod_ready.go:92] pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.874136    7764 pod_ready.go:81] duration metric: took 10.5375ms for pod "coredns-7db6d8ff4d-6zh6m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.874136    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.874136    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000
	I0624 04:32:19.874136    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.874136    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.874136    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.878371    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:19.879517    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:19.879517    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.879517    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.879636    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.882883    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.883896    7764 pod_ready.go:92] pod "etcd-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.883896    7764 pod_ready.go:81] duration metric: took 9.7602ms for pod "etcd-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.883896    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.883896    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m02
	I0624 04:32:19.883896    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.883896    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.883896    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.886306    7764 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 04:32:19.887853    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:19.887912    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:19.887912    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:19.887912    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:19.891833    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:19.891916    7764 pod_ready.go:92] pod "etcd-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:19.891916    7764 pod_ready.go:81] duration metric: took 8.0198ms for pod "etcd-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:19.892474    7764 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.013781    7764 request.go:629] Waited for 121.1133ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m03
	I0624 04:32:20.014040    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-340000-m03
	I0624 04:32:20.014040    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.014040    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.014040    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.018498    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.215841    7764 request.go:629] Waited for 195.2549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:20.216135    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:20.216185    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.216185    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.216185    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.220391    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.223664    7764 pod_ready.go:92] pod "etcd-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:20.223664    7764 pod_ready.go:81] duration metric: took 331.1891ms for pod "etcd-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
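	[annotation] The `Waited for ... due to client-side throttling, not priority and fairness` messages come from client-go's local rate limiter (QPS 5, burst 10 when the config leaves them at zero), not from server-side API Priority and Fairness; since each pod check issues two GETs (pod plus node), the limiter kicks in quickly. If the polling cadence mattered, the limits could be raised on the rest config before building the clientset, e.g.:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at 0, client-go falls back to its conservative
	// defaults (5 QPS, burst 10), which is what produces the throttling
	// log lines above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```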
	I0624 04:32:20.223664    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.422921    7764 request.go:629] Waited for 199.2558ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:32:20.423121    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000
	I0624 04:32:20.423121    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.423121    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.423121    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.427710    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:20.624834    7764 request.go:629] Waited for 195.5905ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:20.625055    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:20.625055    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.625055    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.625186    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.630962    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:20.631629    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:20.631692    7764 pod_ready.go:81] duration metric: took 407.9626ms for pod "kube-apiserver-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.631692    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:20.813525    7764 request.go:629] Waited for 181.4943ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:32:20.813850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m02
	I0624 04:32:20.813850    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:20.813850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:20.813850    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:20.822295    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:21.019304    7764 request.go:629] Waited for 196.1691ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:21.019582    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:21.019582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.019582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.019582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.024165    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:21.025418    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.025418    7764 pod_ready.go:81] duration metric: took 393.7246ms for pod "kube-apiserver-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.025418    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.221744    7764 request.go:629] Waited for 196.325ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m03
	I0624 04:32:21.221850    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-340000-m03
	I0624 04:32:21.221850    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.221850    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.222036    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.227692    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:21.424989    7764 request.go:629] Waited for 195.5815ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:21.425129    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:21.425129    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.425129    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.425129    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.432859    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:21.433312    7764 pod_ready.go:92] pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.433312    7764 pod_ready.go:81] duration metric: took 407.8925ms for pod "kube-apiserver-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.433312    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.614461    7764 request.go:629] Waited for 180.962ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:32:21.614461    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000
	I0624 04:32:21.614461    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.614461    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.614461    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.619998    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:21.816887    7764 request.go:629] Waited for 195.2661ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:21.817439    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:21.817439    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:21.817439    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:21.817531    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:21.821254    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:21.822393    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:21.822519    7764 pod_ready.go:81] duration metric: took 389.2053ms for pod "kube-controller-manager-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:21.822519    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.021645    7764 request.go:629] Waited for 199.1257ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:32:22.021826    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m02
	I0624 04:32:22.021933    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.021933    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.021933    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.030013    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:22.225881    7764 request.go:629] Waited for 194.1134ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:22.226204    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:22.226204    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.226204    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.226204    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.230568    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:22.231984    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:22.232054    7764 pod_ready.go:81] duration metric: took 409.5341ms for pod "kube-controller-manager-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.232054    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.414699    7764 request.go:629] Waited for 182.3666ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m03
	I0624 04:32:22.414812    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-340000-m03
	I0624 04:32:22.414812    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.414812    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.414812    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.423299    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:22.617082    7764 request.go:629] Waited for 192.3023ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:22.617144    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:22.617144    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.617144    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.617144    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.624040    7764 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 04:32:22.625006    7764 pod_ready.go:92] pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:22.625006    7764 pod_ready.go:81] duration metric: took 392.95ms for pod "kube-controller-manager-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.625006    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:22.821218    7764 request.go:629] Waited for 196.0048ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:32:22.821342    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87bnm
	I0624 04:32:22.821342    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:22.821509    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:22.821509    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:22.826500    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:23.025313    7764 request.go:629] Waited for 197.089ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:23.025533    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:23.025533    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.025591    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.025591    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.030402    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:23.032250    7764 pod_ready.go:92] pod "kube-proxy-87bnm" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.032333    7764 pod_ready.go:81] duration metric: took 407.3257ms for pod "kube-proxy-87bnm" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.032333    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.215451    7764 request.go:629] Waited for 182.736ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:32:23.215586    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jktx8
	I0624 04:32:23.215586    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.215586    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.215586    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.220345    7764 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 04:32:23.420064    7764 request.go:629] Waited for 198.729ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:23.420201    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:23.420408    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.420408    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.420408    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.427849    7764 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 04:32:23.429690    7764 pod_ready.go:92] pod "kube-proxy-jktx8" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.429745    7764 pod_ready.go:81] duration metric: took 397.4104ms for pod "kube-proxy-jktx8" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.429745    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xkf7m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.611491    7764 request.go:629] Waited for 181.553ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkf7m
	I0624 04:32:23.611720    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkf7m
	I0624 04:32:23.611720    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.611720    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.611852    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.620711    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:23.826569    7764 request.go:629] Waited for 204.9426ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:23.826799    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:23.826910    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:23.826910    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:23.826910    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:23.832731    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:23.834081    7764 pod_ready.go:92] pod "kube-proxy-xkf7m" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:23.834219    7764 pod_ready.go:81] duration metric: took 404.4722ms for pod "kube-proxy-xkf7m" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:23.834219    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.014787    7764 request.go:629] Waited for 180.3453ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:32:24.014992    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000
	I0624 04:32:24.015104    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.015104    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.015104    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.019823    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.217322    7764 request.go:629] Waited for 195.5396ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:24.217497    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000
	I0624 04:32:24.217582    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.217582    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.217582    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.222070    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.223677    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:24.223677    7764 pod_ready.go:81] duration metric: took 389.4563ms for pod "kube-scheduler-ha-340000" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.223677    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.420403    7764 request.go:629] Waited for 196.5662ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:32:24.420519    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m02
	I0624 04:32:24.420519    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.420519    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.420519    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.425498    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:24.623946    7764 request.go:629] Waited for 197.1623ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:24.623946    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m02
	I0624 04:32:24.623946    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.623946    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.623946    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.632161    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:24.633035    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:24.633035    7764 pod_ready.go:81] duration metric: took 409.2471ms for pod "kube-scheduler-ha-340000-m02" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.633035    7764 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:24.826089    7764 request.go:629] Waited for 192.8603ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m03
	I0624 04:32:24.826215    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-340000-m03
	I0624 04:32:24.826215    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:24.826215    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:24.826307    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:24.830522    7764 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 04:32:25.014446    7764 request.go:629] Waited for 182.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:25.014672    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes/ha-340000-m03
	I0624 04:32:25.014723    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.014723    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.014802    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.023559    7764 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 04:32:25.024887    7764 pod_ready.go:92] pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace has status "Ready":"True"
	I0624 04:32:25.024887    7764 pod_ready.go:81] duration metric: took 391.8496ms for pod "kube-scheduler-ha-340000-m03" in "kube-system" namespace to be "Ready" ...
	I0624 04:32:25.024887    7764 pod_ready.go:38] duration metric: took 5.1985782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 04:32:25.025486    7764 api_server.go:52] waiting for apiserver process to appear ...
	I0624 04:32:25.042426    7764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 04:32:25.072189    7764 api_server.go:72] duration metric: took 16.2519179s to wait for apiserver process to appear ...
	I0624 04:32:25.072336    7764 api_server.go:88] waiting for apiserver healthz status ...
	I0624 04:32:25.072336    7764 api_server.go:253] Checking apiserver healthz at https://172.31.219.170:8443/healthz ...
	I0624 04:32:25.083072    7764 api_server.go:279] https://172.31.219.170:8443/healthz returned 200:
	ok
	I0624 04:32:25.083830    7764 round_trippers.go:463] GET https://172.31.219.170:8443/version
	I0624 04:32:25.083893    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.083893    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.083944    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.085121    7764 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 04:32:25.085121    7764 api_server.go:141] control plane version: v1.30.2
	I0624 04:32:25.085857    7764 api_server.go:131] duration metric: took 13.5208ms to wait for apiserver health ...
	I0624 04:32:25.085935    7764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 04:32:25.218283    7764 request.go:629] Waited for 132.0554ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.218560    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.218560    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.218560    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.218560    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.229110    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:25.239366    7764 system_pods.go:59] 24 kube-system pods found
	I0624 04:32:25.239366    7764 system_pods.go:61] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "etcd-ha-340000-m03" [c5f5b70a-588b-4114-9dd0-e3c4d90979f1] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-8mgnc" [4853ca7d-abd4-4536-b997-660eb300e8bf] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-apiserver-ha-340000-m03" [31532987-9531-4a44-9483-5027eee84cdc] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-controller-manager-ha-340000-m03" [26530110-2239-496e-889c-aa0bb05a2177] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-proxy-xkf7m" [c6f588e9-7459-4d98-a68a-3f0122f834b4] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-scheduler-ha-340000-m03" [b82baee9-7ec1-4fb1-91cd-460dacc55291] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:32:25.239366    7764 system_pods.go:61] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:32:25.239953    7764 system_pods.go:61] "kube-vip-ha-340000-m03" [fd2b4f66-bde4-42d8-8c22-dcedac5cadf0] Running
	I0624 04:32:25.239953    7764 system_pods.go:61] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:32:25.239953    7764 system_pods.go:74] duration metric: took 154.0168ms to wait for pod list to return data ...
	I0624 04:32:25.239953    7764 default_sa.go:34] waiting for default service account to be created ...
	I0624 04:32:25.421334    7764 request.go:629] Waited for 181.3463ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:32:25.421334    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/default/serviceaccounts
	I0624 04:32:25.421334    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.421334    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.421334    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.427249    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:25.427490    7764 default_sa.go:45] found service account: "default"
	I0624 04:32:25.427562    7764 default_sa.go:55] duration metric: took 187.6087ms for default service account to be created ...
	I0624 04:32:25.427617    7764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 04:32:25.612063    7764 request.go:629] Waited for 184.38ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.612234    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/namespaces/kube-system/pods
	I0624 04:32:25.612388    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.612791    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.612791    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.623518    7764 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 04:32:25.633399    7764 system_pods.go:86] 24 kube-system pods found
	I0624 04:32:25.633399    7764 system_pods.go:89] "coredns-7db6d8ff4d-6xxtk" [2cf090ee-ae41-4360-949b-053dd593da2e] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "coredns-7db6d8ff4d-6zh6m" [61d3c632-30f0-413c-9236-50f011df9ad8] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000" [eed275be-c2bf-4369-a131-21cf43c6aa86] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000-m02" [eb5f54ff-2383-4a15-ae0d-5e69427187d3] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "etcd-ha-340000-m03" [c5f5b70a-588b-4114-9dd0-e3c4d90979f1] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-8mgnc" [4853ca7d-abd4-4536-b997-660eb300e8bf] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-k4p7m" [26bb4fe9-a328-49f2-a362-460750e45cf0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kindnet-rmfdg" [07a1dad8-5c08-4d3e-8c49-a3dbc22350a6] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000" [bf9f946b-728d-4229-864d-8daca931ad28] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000-m02" [f26a1474-ec70-4110-a12a-a3671eb70220] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-apiserver-ha-340000-m03" [31532987-9531-4a44-9483-5027eee84cdc] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000" [e4bfaaa1-bb3b-4e66-a33f-c1fc6ecf59c5] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m02" [e9d3aeca-d224-4daa-ba10-b6afb92a044a] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-controller-manager-ha-340000-m03" [26530110-2239-496e-889c-aa0bb05a2177] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-87bnm" [ec280f9d-6b3d-4ddc-8d9e-ef6ad5b08059] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-jktx8" [f27823e0-3bad-4e2b-b0e8-58ecce0606ea] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-proxy-xkf7m" [c6f588e9-7459-4d98-a68a-3f0122f834b4] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000" [03276e14-e81a-4084-829a-6f5b79134a49] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000-m02" [5dd77380-ec5a-45e3-a7d5-76a51ebff97a] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-scheduler-ha-340000-m03" [b82baee9-7ec1-4fb1-91cd-460dacc55291] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000" [e8206db8-ecbb-4a85-9bec-246fac8e89b0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000-m02" [a6faaa20-a72b-4f15-8c9e-16fdd053dac0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "kube-vip-ha-340000-m03" [fd2b4f66-bde4-42d8-8c22-dcedac5cadf0] Running
	I0624 04:32:25.633399    7764 system_pods.go:89] "storage-provisioner" [42736a57-b903-4f63-a1bf-65e10d1b67aa] Running
	I0624 04:32:25.633399    7764 system_pods.go:126] duration metric: took 205.7805ms to wait for k8s-apps to be running ...
	I0624 04:32:25.633399    7764 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 04:32:25.644704    7764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 04:32:25.675184    7764 system_svc.go:56] duration metric: took 41.7854ms WaitForService to wait for kubelet
	I0624 04:32:25.675184    7764 kubeadm.go:576] duration metric: took 16.8549102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 04:32:25.675184    7764 node_conditions.go:102] verifying NodePressure condition ...
	I0624 04:32:25.819734    7764 request.go:629] Waited for 144.4176ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.219.170:8443/api/v1/nodes
	I0624 04:32:25.819813    7764 round_trippers.go:463] GET https://172.31.219.170:8443/api/v1/nodes
	I0624 04:32:25.819813    7764 round_trippers.go:469] Request Headers:
	I0624 04:32:25.819894    7764 round_trippers.go:473]     Accept: application/json, */*
	I0624 04:32:25.819954    7764 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 04:32:25.825722    7764 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 04:32:25.827400    7764 node_conditions.go:123] node cpu capacity is 2
	I0624 04:32:25.827400    7764 node_conditions.go:105] duration metric: took 152.2153ms to run NodePressure ...
	I0624 04:32:25.827400    7764 start.go:240] waiting for startup goroutines ...
	I0624 04:32:25.827400    7764 start.go:254] writing updated cluster config ...
	I0624 04:32:25.841759    7764 ssh_runner.go:195] Run: rm -f paused
	I0624 04:32:26.004929    7764 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0624 04:32:26.008221    7764 out.go:177] * Done! kubectl is now configured to use "ha-340000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0dab5dcd476f47a30e07c9a16098451d15147ab0d169a4ba10025d366cc49641/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/155582c2f095eaf00f2c023270663657207b1e1d75c73d7bc110ba03729eb826/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674001531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674294132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.674717233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.675511336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:24:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/833cea563c83c88c2aee77fd8ad46234843a25c0fbc228859bdc9dc7b77572c4/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993201406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993430007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993448707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:42 ha-340000 dockerd[1330]: time="2024-06-24T11:24:42.993749908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.092911226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093324728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093433329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:24:43 ha-340000 dockerd[1330]: time="2024-06-24T11:24:43.093609129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557299536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557395937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557411837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 dockerd[1330]: time="2024-06-24T11:33:05.557674639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:05 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:33:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b025a7a92eb76586e6a5922889948f4f0bc62eaae70f359f94dbdcba5eda220c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 24 11:33:07 ha-340000 cri-dockerd[1232]: time="2024-06-24T11:33:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389033270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389399471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389438171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 11:33:07 ha-340000 dockerd[1330]: time="2024-06-24T11:33:07.389716372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66537845ba76a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   b025a7a92eb76       busybox-fc5497c4f-mg7l6
	7a761577e341f       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   833cea563c83c       coredns-7db6d8ff4d-6xxtk
	cd348d4e5aabb       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   155582c2f095e       coredns-7db6d8ff4d-6zh6m
	d1ce6ad1d1c36       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   0dab5dcd476f4       storage-provisioner
	907fa20f2449c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   7485bf2f02157       kindnet-k4p7m
	a455e5d79591c       53c535741fb44                                                                                         26 minutes ago      Running             kube-proxy                0                   fb60bddb8bb5f       kube-proxy-jktx8
	846133f35b3bb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   da0ca313de317       kube-vip-ha-340000
	294520b11212a       e874818b3caac                                                                                         27 minutes ago      Running             kube-controller-manager   0                   f22dad9ab27ee       kube-controller-manager-ha-340000
	76c78b3ed83d9       7820c83aa1394                                                                                         27 minutes ago      Running             kube-scheduler            0                   107803efb04ae       kube-scheduler-ha-340000
	3d24fc713d0cd       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   917c33a433524       etcd-ha-340000
	d4dc3f4ed7f8b       56ce0fd9fb532                                                                                         27 minutes ago      Running             kube-apiserver            0                   b74d0615ee4a0       kube-apiserver-ha-340000
	
	
	==> coredns [7a761577e341] <==
	[INFO] 10.244.1.2:56437 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000657s
	[INFO] 10.244.2.2:50732 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148301s
	[INFO] 10.244.2.2:37925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.125148764s
	[INFO] 10.244.2.2:53136 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000326101s
	[INFO] 10.244.2.2:47141 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.034498028s
	[INFO] 10.244.2.2:49837 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121101s
	[INFO] 10.244.0.4:55762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001136s
	[INFO] 10.244.0.4:53102 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001016s
	[INFO] 10.244.0.4:45651 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000976s
	[INFO] 10.244.0.4:34355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128401s
	[INFO] 10.244.1.2:39172 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088s
	[INFO] 10.244.1.2:53752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	[INFO] 10.244.1.2:40644 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001866s
	[INFO] 10.244.2.2:57720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225001s
	[INFO] 10.244.2.2:47121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000685s
	[INFO] 10.244.2.2:33768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000672s
	[INFO] 10.244.0.4:50263 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079s
	[INFO] 10.244.0.4:56311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001967s
	[INFO] 10.244.1.2:46985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276701s
	[INFO] 10.244.1.2:58755 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000695s
	[INFO] 10.244.1.2:59285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000777s
	[INFO] 10.244.2.2:33498 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109801s
	[INFO] 10.244.0.4:60901 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001442s
	[INFO] 10.244.0.4:48052 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000230901s
	[INFO] 10.244.1.2:46845 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101201s
	
	
	==> coredns [cd348d4e5aab] <==
	[INFO] 10.244.1.2:54548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000901904s
	[INFO] 10.244.2.2:34605 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001461s
	[INFO] 10.244.2.2:45784 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001423s
	[INFO] 10.244.2.2:47857 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001039s
	[INFO] 10.244.0.4:51969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012272046s
	[INFO] 10.244.0.4:53245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188801s
	[INFO] 10.244.0.4:39298 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023027385s
	[INFO] 10.244.0.4:50860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000684s
	[INFO] 10.244.1.2:35217 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193301s
	[INFO] 10.244.1.2:43043 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000068s
	[INFO] 10.244.1.2:56637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076101s
	[INFO] 10.244.1.2:57783 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252001s
	[INFO] 10.244.1.2:41276 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000099501s
	[INFO] 10.244.2.2:52577 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085401s
	[INFO] 10.244.0.4:43320 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001947s
	[INFO] 10.244.0.4:47744 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130501s
	[INFO] 10.244.1.2:41866 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000749s
	[INFO] 10.244.2.2:55690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000300902s
	[INFO] 10.244.2.2:37854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158801s
	[INFO] 10.244.2.2:34018 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001014s
	[INFO] 10.244.0.4:44130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210901s
	[INFO] 10.244.0.4:53619 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131101s
	[INFO] 10.244.1.2:47636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000278202s
	[INFO] 10.244.1.2:40590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001534s
	[INFO] 10.244.1.2:51259 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000844s
	
	
	==> describe nodes <==
	Name:               ha-340000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T04_24_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:24:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:48:43 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:48:43 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:48:43 +0000   Mon, 24 Jun 2024 11:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:48:43 +0000   Mon, 24 Jun 2024 11:24:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.219.170
	  Hostname:    ha-340000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dc1d6ea23974cf3bc55999d63a14514
	  System UUID:                fa1eb7b0-0abc-5149-a08c-a27e05d5426a
	  Boot ID:                    a193d5a8-20d3-444f-b9d5-f391ed40c2ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mg7l6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-6xxtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-6zh6m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-340000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-k4p7m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-340000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-340000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-jktx8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-340000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-340000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-340000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-340000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-340000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-340000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-340000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-340000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26m                node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-340000 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-340000 event: Registered Node ha-340000 in Controller
	
	
	Name:               ha-340000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T04_28_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:28:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:50:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 24 Jun 2024 11:49:03 +0000   Mon, 24 Jun 2024 11:50:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 24 Jun 2024 11:49:03 +0000   Mon, 24 Jun 2024 11:50:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 24 Jun 2024 11:49:03 +0000   Mon, 24 Jun 2024 11:50:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 24 Jun 2024 11:49:03 +0000   Mon, 24 Jun 2024 11:50:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.31.216.99
	  Hostname:    ha-340000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5aeac87c1e14cd39bb8892cd5382f7b
	  System UUID:                4f29bf5a-5b58-9940-b557-7ea78cd09aaa
	  Boot ID:                    0b07c24a-5e00-4ef1-b25b-a44b3f20cf09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rrqj8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-340000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-rmfdg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-340000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-340000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-87bnm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-340000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-340000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  RegisteredNode           23m                node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-340000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-340000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-340000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-340000-m02 event: Registered Node ha-340000-m02 in Controller
	  Normal  NodeNotReady             37s                node-controller  Node ha-340000-m02 status is now: NodeNotReady
	
	
	Name:               ha-340000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T04_32_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:32:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:51:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:48:49 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:48:49 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:48:49 +0000   Mon, 24 Jun 2024 11:32:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:48:49 +0000   Mon, 24 Jun 2024 11:32:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.215.46
	  Hostname:    ha-340000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 73794d89abea4893be9ddfc306311730
	  System UUID:                b9d53c05-73eb-4c4b-9b21-878922a12b5a
	  Boot ID:                    9bebd6fa-8629-4a52-99a5-4216403a6bb4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lsn8j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-340000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-8mgnc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-340000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-340000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xkf7m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-340000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-340000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-340000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-340000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-340000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-340000-m03 event: Registered Node ha-340000-m03 in Controller
	
	
	Name:               ha-340000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-340000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=ha-340000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T04_37_29_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 11:37:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-340000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 11:51:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 11:48:12 +0000   Mon, 24 Jun 2024 11:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 11:48:12 +0000   Mon, 24 Jun 2024 11:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 11:48:12 +0000   Mon, 24 Jun 2024 11:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 11:48:12 +0000   Mon, 24 Jun 2024 11:37:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.222.135
	  Hostname:    ha-340000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0467ac2f22d42848b15d72fb86208e3
	  System UUID:                90286497-809c-6242-bee9-37d2bbb9ab42
	  Boot ID:                    dce2ec77-9dab-4722-b9c4-f9246e01d3b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gnrlt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-pshdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-340000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-340000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-340000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-340000-m04 event: Registered Node ha-340000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-340000-m04 event: Registered Node ha-340000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-340000-m04 event: Registered Node ha-340000-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-340000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.728474] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.077611] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun24 11:23] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.191839] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.696754] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.101348] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.539188] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.189520] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.229470] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.774971] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.221863] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.195680] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.266531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[ +11.086877] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.104097] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.034404] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[Jun24 11:24] systemd-fstab-generator[1719]: Ignoring "noauto" option for root device
	[  +0.103839] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.759776] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.814579] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[ +15.286113] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.307080] kauditd_printk_skb: 29 callbacks suppressed
	[Jun24 11:28] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3d24fc713d0c] <==
	{"level":"warn","ts":"2024-06-24T11:51:24.465251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.473609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.478039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.495344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.514351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.523135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.525296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.530298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.535799Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.546282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.55579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.566193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.571702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.576768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.593696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.601379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.603796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.613967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.622002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.625252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.626282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.636989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.645353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.675323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-24T11:51:24.725546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"41766fee91dd9d05","from":"41766fee91dd9d05","remote-peer-id":"29becae01bc6f857","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:51:24 up 29 min,  0 users,  load average: 0.24, 0.64, 0.63
	Linux ha-340000 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [907fa20f2449] <==
	I0624 11:50:50.037399       1 main.go:250] Node ha-340000-m04 has CIDR [10.244.3.0/24] 
	I0624 11:51:00.053920       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:51:00.054009       1 main.go:227] handling current node
	I0624 11:51:00.059768       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:51:00.109764       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:51:00.110337       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:51:00.110536       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:51:00.110914       1 main.go:223] Handling node with IPs: map[172.31.222.135:{}]
	I0624 11:51:00.111027       1 main.go:250] Node ha-340000-m04 has CIDR [10.244.3.0/24] 
	I0624 11:51:10.125881       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:51:10.126037       1 main.go:227] handling current node
	I0624 11:51:10.126058       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:51:10.126117       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:51:10.126279       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:51:10.126319       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:51:10.126448       1 main.go:223] Handling node with IPs: map[172.31.222.135:{}]
	I0624 11:51:10.126534       1 main.go:250] Node ha-340000-m04 has CIDR [10.244.3.0/24] 
	I0624 11:51:20.140141       1 main.go:223] Handling node with IPs: map[172.31.219.170:{}]
	I0624 11:51:20.140248       1 main.go:227] handling current node
	I0624 11:51:20.140264       1 main.go:223] Handling node with IPs: map[172.31.216.99:{}]
	I0624 11:51:20.140272       1 main.go:250] Node ha-340000-m02 has CIDR [10.244.1.0/24] 
	I0624 11:51:20.140659       1 main.go:223] Handling node with IPs: map[172.31.215.46:{}]
	I0624 11:51:20.140764       1 main.go:250] Node ha-340000-m03 has CIDR [10.244.2.0/24] 
	I0624 11:51:20.141035       1 main.go:223] Handling node with IPs: map[172.31.222.135:{}]
	I0624 11:51:20.141052       1 main.go:250] Node ha-340000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d4dc3f4ed7f8] <==
	I0624 11:24:12.613288       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0624 11:24:12.630403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.219.170]
	I0624 11:24:12.631496       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 11:24:12.642974       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 11:24:13.211603       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 11:24:14.557259       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 11:24:14.584316       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0624 11:24:14.609616       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 11:24:27.297559       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0624 11:24:27.419436       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0624 11:33:12.629777       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63056: use of closed network connection
	E0624 11:33:14.142329       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63058: use of closed network connection
	E0624 11:33:14.598550       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63060: use of closed network connection
	E0624 11:33:15.202478       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63062: use of closed network connection
	E0624 11:33:15.691173       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63064: use of closed network connection
	E0624 11:33:16.156634       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63066: use of closed network connection
	E0624 11:33:16.599773       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63068: use of closed network connection
	E0624 11:33:17.052178       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63070: use of closed network connection
	E0624 11:33:17.486498       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63072: use of closed network connection
	E0624 11:33:18.262869       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63075: use of closed network connection
	E0624 11:33:28.714962       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63077: use of closed network connection
	E0624 11:33:29.157751       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63080: use of closed network connection
	E0624 11:33:39.608619       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63082: use of closed network connection
	E0624 11:33:40.048803       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63084: use of closed network connection
	E0624 11:33:50.491713       1 conn.go:339] Error on socket receive: read tcp 172.31.223.254:8443->172.31.208.1:63086: use of closed network connection
	
	
	==> kube-controller-manager [294520b11212] <==
	E0624 11:33:05.255196       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:05.556417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="301.10805ms"
	E0624 11:33:05.556471       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:05.556546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.3µs"
	I0624 11:33:05.566665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.801µs"
	I0624 11:33:06.087277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.7µs"
	I0624 11:33:06.113340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.2µs"
	I0624 11:33:06.134170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36µs"
	I0624 11:33:07.090587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.3µs"
	I0624 11:33:07.154959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.9µs"
	I0624 11:33:07.255605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.901µs"
	I0624 11:33:07.982807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.626422ms"
	E0624 11:33:07.982866       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:33:07.983166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.201µs"
	I0624 11:33:07.988686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.2µs"
	I0624 11:33:10.171782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.184366ms"
	I0624 11:33:10.172410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="559.202µs"
	E0624 11:37:28.212822       1 certificate_controller.go:146] Sync csr-vtfjf failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-vtfjf": the object has been modified; please apply your changes to the latest version and try again
	I0624 11:37:28.304891       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-340000-m04\" does not exist"
	I0624 11:37:28.337605       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-340000-m04" podCIDRs=["10.244.3.0/24"]
	I0624 11:37:31.917235       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-340000-m04"
	I0624 11:37:51.825315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-340000-m04"
	I0624 11:50:47.129964       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-340000-m04"
	I0624 11:50:47.398116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.478601ms"
	I0624 11:50:47.398577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.7µs"
	
	
	==> kube-proxy [a455e5d79591] <==
	I0624 11:24:30.038976       1 server_linux.go:69] "Using iptables proxy"
	I0624 11:24:30.073333       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.219.170"]
	I0624 11:24:30.226639       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 11:24:30.226783       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 11:24:30.226808       1 server_linux.go:165] "Using iptables Proxier"
	I0624 11:24:30.231323       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 11:24:30.231875       1 server.go:872] "Version info" version="v1.30.2"
	I0624 11:24:30.232064       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 11:24:30.233934       1 config.go:192] "Starting service config controller"
	I0624 11:24:30.234316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 11:24:30.234538       1 config.go:101] "Starting endpoint slice config controller"
	I0624 11:24:30.235010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 11:24:30.236029       1 config.go:319] "Starting node config controller"
	I0624 11:24:30.236427       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 11:24:30.334959       1 shared_informer.go:320] Caches are synced for service config
	I0624 11:24:30.336429       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 11:24:30.336894       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76c78b3ed83d] <==
	W0624 11:24:11.328854       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 11:24:11.328913       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0624 11:24:11.349387       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0624 11:24:11.349520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0624 11:24:11.419840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0624 11:24:11.420754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0624 11:24:11.421144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 11:24:11.421246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 11:24:11.458218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 11:24:11.458286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 11:24:11.556592       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0624 11:24:11.556731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0624 11:24:11.556808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0624 11:24:11.556849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0624 11:24:11.571252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0624 11:24:11.571280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0624 11:24:11.590878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0624 11:24:11.591210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0624 11:24:11.794875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0624 11:24:11.796681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 11:24:14.162430       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0624 11:33:04.472945       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lsn8j\": pod busybox-fc5497c4f-lsn8j is already assigned to node \"ha-340000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lsn8j" node="ha-340000-m03"
	E0624 11:33:04.477564       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f271f626-6a96-4a53-8b97-32e461250473(default/busybox-fc5497c4f-lsn8j) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lsn8j"
	E0624 11:33:04.477674       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lsn8j\": pod busybox-fc5497c4f-lsn8j is already assigned to node \"ha-340000-m03\"" pod="default/busybox-fc5497c4f-lsn8j"
	I0624 11:33:04.477714       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lsn8j" node="ha-340000-m03"
	
	
	==> kubelet <==
	Jun 24 11:47:14 ha-340000 kubelet[2212]: E0624 11:47:14.710055    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:47:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:47:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:47:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:47:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:48:14 ha-340000 kubelet[2212]: E0624 11:48:14.708286    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:48:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:48:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:48:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:48:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:49:14 ha-340000 kubelet[2212]: E0624 11:49:14.708283    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:49:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:49:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:49:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:49:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:50:14 ha-340000 kubelet[2212]: E0624 11:50:14.705201    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:50:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:50:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:50:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:50:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 11:51:14 ha-340000 kubelet[2212]: E0624 11:51:14.710693    2212 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 11:51:14 ha-340000 kubelet[2212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 11:51:14 ha-340000 kubelet[2212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 11:51:14 ha-340000 kubelet[2212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 11:51:14 ha-340000 kubelet[2212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 04:51:16.563640    2788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-340000 -n ha-340000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-340000 -n ha-340000: (12.2589785s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-340000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (104.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (190.33s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-607600
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-607600: exit status 90 (2m58.2341819s)

                                                
                                                
-- stdout --
	* [mount-start-2-607600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-607600
	* Restarting existing hyperv VM for "mount-start-2-607600" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:19:05.624219   10300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 24 12:20:34 mount-start-2-607600 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 12:20:34 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:34.654674324Z" level=info msg="Starting up"
	Jun 24 12:20:34 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:34.655977597Z" level=info msg="containerd not running, starting managed containerd"
	Jun 24 12:20:34 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:34.659774620Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.695473089Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.729612289Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.729782686Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.729927883Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.729956582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730720067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730742466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730948462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730967462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730980661Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.730992161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.731466751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.732231836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.735753064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.735880661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.736111156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.736185055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.736673245Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.736786443Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.736828542Z" level=info msg="metadata content store policy set" policy=shared
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.739124395Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.739247492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.739270892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.739292791Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.740476867Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.740710062Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741231252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741409448Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741507246Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741583244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741637543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741705242Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.741834439Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742023135Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742049135Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742126633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742216531Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742234831Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742258430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742275930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742290830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742306330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742321229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742337229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742351329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742366128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742381328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742398928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742412627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742431627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742447727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742465926Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742490226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742505925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742519725Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742603523Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742678722Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742694322Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742709521Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742720721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742735221Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.742746421Z" level=info msg="NRI interface is disabled by configuration."
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.743127313Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.743392807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.743506705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 24 12:20:34 mount-start-2-607600 dockerd[665]: time="2024-06-24T12:20:34.743550304Z" level=info msg="containerd successfully booted in 0.051078s"
	Jun 24 12:20:35 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:35.715111620Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 24 12:20:35 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:35.756211631Z" level=info msg="Loading containers: start."
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.003570337Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.088923578Z" level=info msg="Loading containers: done."
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.115900748Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.116415156Z" level=info msg="Daemon has completed initialization"
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.173944848Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 24 12:20:36 mount-start-2-607600 systemd[1]: Started Docker Application Container Engine.
	Jun 24 12:20:36 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:20:36.176022585Z" level=info msg="API listen on [::]:2376"
	Jun 24 12:21:02 mount-start-2-607600 systemd[1]: Stopping Docker Application Container Engine...
	Jun 24 12:21:02 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:21:02.541207539Z" level=info msg="Processing signal 'terminated'"
	Jun 24 12:21:02 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:21:02.542962800Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 24 12:21:02 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:21:02.543538020Z" level=info msg="Daemon shutdown complete"
	Jun 24 12:21:02 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:21:02.543579121Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 24 12:21:02 mount-start-2-607600 dockerd[659]: time="2024-06-24T12:21:02.543586821Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 24 12:21:03 mount-start-2-607600 systemd[1]: docker.service: Deactivated successfully.
	Jun 24 12:21:03 mount-start-2-607600 systemd[1]: Stopped Docker Application Container Engine.
	Jun 24 12:21:03 mount-start-2-607600 systemd[1]: Starting Docker Application Container Engine...
	Jun 24 12:21:03 mount-start-2-607600 dockerd[1035]: time="2024-06-24T12:21:03.618534958Z" level=info msg="Starting up"
	Jun 24 12:22:03 mount-start-2-607600 dockerd[1035]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 24 12:22:03 mount-start-2-607600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 24 12:22:03 mount-start-2-607600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 24 12:22:03 mount-start-2-607600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-607600" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-607600 -n mount-start-2-607600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-607600 -n mount-start-2-607600: exit status 6 (12.0939701s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:22:03.875662    6788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 05:22:15.790783    6788 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-607600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-607600" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (190.33s)
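Note: the RUNTIME_ENABLE failure above bottoms out in dockerd timing out while dialing /run/containerd/containerd.sock on the second start attempt. A minimal, hypothetical diagnostic pass over the guest (assuming the VM is still reachable with the same minikube binary used throughout this report; these are the commands the systemd error message itself points at, not part of the test suite):

	# Inspect the Docker unit, its journal, and the containerd socket inside the guest.
	out/minikube-windows-amd64.exe ssh -p mount-start-2-607600 -- sudo systemctl status docker.service
	out/minikube-windows-amd64.exe ssh -p mount-start-2-607600 -- sudo journalctl --no-pager -xeu docker.service
	out/minikube-windows-amd64.exe ssh -p mount-start-2-607600 -- ls -l /run/containerd/containerd.sock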

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (56.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- sh -c "ping -c 1 172.31.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.4208582s)

                                                
                                                
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:30:35.755282    3884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.31.208.1) from pod (busybox-fc5497c4f-ddhfw): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- sh -c "ping -c 1 172.31.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- sh -c "ping -c 1 172.31.208.1": exit status 1 (10.3838181s)

                                                
                                                
-- stdout --
	PING 172.31.208.1 (172.31.208.1): 56 data bytes
	
	--- 172.31.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:30:46.597207    4256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.31.208.1) from pod (busybox-fc5497c4f-vqhsz): exit status 1
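Both busybox pods resolved host.minikube.internal but lost every ICMP packet to the Hyper-V host at 172.31.208.1. A hand-run version of the same probe, plus a contrast ping against the primary node IP (172.31.211.219, taken from the Last Start log below), can help separate pod-side problems from host-side filtering; the idea that the Windows host firewall drops ICMP echo from the Default Switch subnet is an assumption this log does not confirm.
	# Repeat the exact check the test performs
	out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- sh -c "ping -c 1 172.31.208.1"
	# Contrast probe: ping the minikube node itself from the same pod
	out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- sh -c "ping -c 1 172.31.211.219"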
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-876600 -n multinode-876600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-876600 -n multinode-876600: (11.9767101s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 logs -n 25: (8.2674589s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-607600                           | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:15 PDT | 24 Jun 24 05:17 PDT |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:17 PDT |                     |
	|         | --profile mount-start-2-607600 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-607600 ssh -- ls                    | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:17 PDT | 24 Jun 24 05:17 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-607600                           | mount-start-1-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:17 PDT | 24 Jun 24 05:18 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-607600 ssh -- ls                    | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:18 PDT | 24 Jun 24 05:18 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-607600                           | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:18 PDT | 24 Jun 24 05:19 PDT |
	| start   | -p mount-start-2-607600                           | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:19 PDT |                     |
	| delete  | -p mount-start-2-607600                           | mount-start-2-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:22 PDT | 24 Jun 24 05:23 PDT |
	| delete  | -p mount-start-1-607600                           | mount-start-1-607600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:23 PDT | 24 Jun 24 05:23 PDT |
	| start   | -p multinode-876600                               | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:23 PDT | 24 Jun 24 05:30 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- apply -f                   | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- rollout                    | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- get pods -o                | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- get pods -o                | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-ddhfw --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-vqhsz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-ddhfw --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-vqhsz --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-ddhfw -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-vqhsz -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- get pods -o                | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-ddhfw                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT |                     |
	|         | busybox-fc5497c4f-ddhfw -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT | 24 Jun 24 05:30 PDT |
	|         | busybox-fc5497c4f-vqhsz                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-876600 -- exec                       | multinode-876600     | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:30 PDT |                     |
	|         | busybox-fc5497c4f-vqhsz -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.31.208.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 05:23:19
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 05:23:19.190301    6684 out.go:291] Setting OutFile to fd 632 ...
	I0624 05:23:19.190950    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:23:19.190950    6684 out.go:304] Setting ErrFile to fd 664...
	I0624 05:23:19.190950    6684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:23:19.214704    6684 out.go:298] Setting JSON to false
	I0624 05:23:19.223411    6684 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22254,"bootTime":1719209544,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 05:23:19.223411    6684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 05:23:19.229589    6684 out.go:177] * [multinode-876600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 05:23:19.233199    6684 notify.go:220] Checking for updates...
	I0624 05:23:19.236292    6684 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:23:19.238896    6684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 05:23:19.241666    6684 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 05:23:19.244733    6684 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 05:23:19.247399    6684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 05:23:19.251994    6684 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:23:19.252530    6684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 05:23:24.660295    6684 out.go:177] * Using the hyperv driver based on user configuration
	I0624 05:23:24.667541    6684 start.go:297] selected driver: hyperv
	I0624 05:23:24.667541    6684 start.go:901] validating driver "hyperv" against <nil>
	I0624 05:23:24.667846    6684 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 05:23:24.716406    6684 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 05:23:24.717400    6684 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:23:24.717400    6684 cni.go:84] Creating CNI manager for ""
	I0624 05:23:24.717400    6684 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0624 05:23:24.717400    6684 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0624 05:23:24.717903    6684 start.go:340] cluster config:
	{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:23:24.717903    6684 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 05:23:24.724617    6684 out.go:177] * Starting "multinode-876600" primary control-plane node in "multinode-876600" cluster
	I0624 05:23:24.729472    6684 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:23:24.730704    6684 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 05:23:24.730704    6684 cache.go:56] Caching tarball of preloaded images
	I0624 05:23:24.731027    6684 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:23:24.731354    6684 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:23:24.731666    6684 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:23:24.731854    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json: {Name:mk3586c75bd43261236171b7655f865c36187532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:23:24.733111    6684 start.go:360] acquireMachinesLock for multinode-876600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:23:24.733111    6684 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-876600"
	I0624 05:23:24.733642    6684 start.go:93] Provisioning new machine with config: &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 05:23:24.733709    6684 start.go:125] createHost starting for "" (driver="hyperv")
	I0624 05:23:24.738388    6684 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 05:23:24.738443    6684 start.go:159] libmachine.API.Create for "multinode-876600" (driver="hyperv")
	I0624 05:23:24.738443    6684 client.go:168] LocalClient.Create starting
	I0624 05:23:24.739065    6684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 05:23:24.739065    6684 main.go:141] libmachine: Decoding PEM data...
	I0624 05:23:24.739595    6684 main.go:141] libmachine: Parsing certificate...
	I0624 05:23:24.739682    6684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 05:23:24.739682    6684 main.go:141] libmachine: Decoding PEM data...
	I0624 05:23:24.739682    6684 main.go:141] libmachine: Parsing certificate...
	I0624 05:23:24.739682    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 05:23:26.866831    6684 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 05:23:26.867671    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:26.867765    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 05:23:28.616128    6684 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 05:23:28.616128    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:28.616128    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 05:23:30.162258    6684 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 05:23:30.162258    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:30.162843    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 05:23:33.812939    6684 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 05:23:33.813666    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:33.816830    6684 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 05:23:34.342371    6684 main.go:141] libmachine: Creating SSH key...
	I0624 05:23:34.671355    6684 main.go:141] libmachine: Creating VM...
	I0624 05:23:34.671517    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 05:23:37.594352    6684 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 05:23:37.594352    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:37.595350    6684 main.go:141] libmachine: Using switch "Default Switch"
	I0624 05:23:37.595444    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 05:23:39.384579    6684 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 05:23:39.384810    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:39.384870    6684 main.go:141] libmachine: Creating VHD
	I0624 05:23:39.384870    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 05:23:43.201031    6684 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3B25DF9A-23A4-4E06-9730-2A428F2CAEC4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 05:23:43.201784    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:43.201875    6684 main.go:141] libmachine: Writing magic tar header
	I0624 05:23:43.201957    6684 main.go:141] libmachine: Writing SSH key tar header
	I0624 05:23:43.210863    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 05:23:46.436265    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:23:46.437063    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:46.437143    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\disk.vhd' -SizeBytes 20000MB
	I0624 05:23:49.020476    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:23:49.020611    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:49.020677    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-876600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 05:23:52.622564    6684 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-876600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 05:23:52.622564    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:52.622564    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-876600 -DynamicMemoryEnabled $false
	I0624 05:23:54.908398    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:23:54.908398    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:54.908398    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-876600 -Count 2
	I0624 05:23:57.131608    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:23:57.131949    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:57.131949    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-876600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\boot2docker.iso'
	I0624 05:23:59.800927    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:23:59.800927    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:23:59.801234    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-876600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\disk.vhd'
	I0624 05:24:02.511725    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:02.512678    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:02.512678    6684 main.go:141] libmachine: Starting VM...
	I0624 05:24:02.512678    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600
	I0624 05:24:05.664252    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:05.664252    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:05.664252    6684 main.go:141] libmachine: Waiting for host to start...
	I0624 05:24:05.664252    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:07.988623    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:07.988790    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:07.988790    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:10.576459    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:10.576459    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:11.585525    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:13.840363    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:13.841438    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:13.841542    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:16.447778    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:16.448004    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:17.449924    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:19.701022    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:19.701022    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:19.701937    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:22.274615    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:22.274615    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:23.281477    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:25.524196    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:25.524196    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:25.525238    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:28.076555    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:24:28.076555    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:29.084408    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:31.320514    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:31.320514    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:31.320514    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:33.950870    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:24:33.950932    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:33.950932    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:36.146376    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:36.146648    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:36.146648    6684 machine.go:94] provisionDockerMachine start ...
	I0624 05:24:36.146903    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:38.328685    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:38.328685    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:38.328685    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:40.885985    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:24:40.886016    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:40.892563    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:24:40.903405    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:24:40.903405    6684 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:24:41.041069    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 05:24:41.041161    6684 buildroot.go:166] provisioning hostname "multinode-876600"
	I0624 05:24:41.041161    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:43.219059    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:43.219272    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:43.219272    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:45.795579    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:24:45.795772    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:45.801014    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:24:45.801507    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:24:45.801507    6684 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600 && echo "multinode-876600" | sudo tee /etc/hostname
	I0624 05:24:45.961278    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600
	
	I0624 05:24:45.961278    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:48.126819    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:48.126819    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:48.127270    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:50.673493    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:24:50.673493    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:50.680230    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:24:50.680230    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:24:50.680825    6684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:24:50.834408    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 05:24:50.834408    6684 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:24:50.834408    6684 buildroot.go:174] setting up certificates
	I0624 05:24:50.834408    6684 provision.go:84] configureAuth start
	I0624 05:24:50.834408    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:52.978035    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:52.978128    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:52.978128    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:24:55.567359    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:24:55.567921    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:55.567983    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:24:57.739964    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:24:57.739964    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:24:57.740173    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:00.303906    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:00.304901    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:00.304901    6684 provision.go:143] copyHostCerts
	I0624 05:25:00.304901    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:25:00.305370    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:25:00.305370    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:25:00.305949    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:25:00.307226    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:25:00.307504    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:25:00.307504    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:25:00.307918    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:25:00.308846    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:25:00.308970    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:25:00.308970    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:25:00.308970    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:25:00.310172    6684 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600 san=[127.0.0.1 172.31.211.219 localhost minikube multinode-876600]
	I0624 05:25:00.557457    6684 provision.go:177] copyRemoteCerts
	I0624 05:25:00.574457    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:25:00.574457    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:02.662310    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:02.662310    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:02.663015    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:05.165686    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:05.165686    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:05.165686    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:25:05.284174    6684 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7096993s)
	I0624 05:25:05.284367    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:25:05.284941    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:25:05.331810    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:25:05.331810    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0624 05:25:05.386797    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:25:05.387357    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 05:25:05.432271    6684 provision.go:87] duration metric: took 14.5978073s to configureAuth
	I0624 05:25:05.432393    6684 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:25:05.432961    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:25:05.433061    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:07.570436    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:07.570657    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:07.570767    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:10.108874    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:10.108874    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:10.114729    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:25:10.115404    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:25:10.115404    6684 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:25:10.249015    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:25:10.249015    6684 buildroot.go:70] root file system type: tmpfs
	I0624 05:25:10.249630    6684 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:25:10.249630    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:12.416762    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:12.416762    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:12.416860    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:14.951248    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:14.951248    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:14.959088    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:25:14.959088    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:25:14.959088    6684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:25:15.120968    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:25:15.120968    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:17.248784    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:17.249437    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:17.249558    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:19.847536    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:19.847595    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:19.852258    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:25:19.853049    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:25:19.853198    6684 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:25:21.991633    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:25:21.991633    6684 machine.go:97] duration metric: took 45.8448103s to provisionDockerMachine
	I0624 05:25:21.991633    6684 client.go:171] duration metric: took 1m57.2527427s to LocalClient.Create
	I0624 05:25:21.991633    6684 start.go:167] duration metric: took 1m57.2527427s to libmachine.API.Create "multinode-876600"
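The unit file above is written to docker.service.new and only swapped into place when it differs from the current unit, after which systemd is reloaded and docker is re-enabled and restarted. A minimal Go sketch of how that install-if-changed one-liner could be composed (hypothetical helper name, not minikube's actual code):

    // Sketch (illustrative only): build the "install only if changed" shell
    // one-liner seen in the log above for an arbitrary systemd unit path.
    package main

    import "fmt"

    // swapIfChangedCmd replaces `unit` with `unit`.new and restarts `service`
    // only when the new file actually differs from the current one.
    func swapIfChangedCmd(unit, service string) string {
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
    			"sudo systemctl -f restart %[2]s; }",
    		unit, service)
    }

    func main() {
    	fmt.Println(swapIfChangedCmd("/lib/systemd/system/docker.service", "docker"))
    }

Writing to a .new path first keeps the running service untouched when nothing has changed.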
	I0624 05:25:21.991633    6684 start.go:293] postStartSetup for "multinode-876600" (driver="hyperv")
	I0624 05:25:21.991633    6684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:25:22.004065    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:25:22.004065    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:24.136127    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:24.136216    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:24.136301    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:26.705719    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:26.706608    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:26.706803    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:25:26.818516    6684 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8143723s)
	I0624 05:25:26.831296    6684 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:25:26.838127    6684 command_runner.go:130] > NAME=Buildroot
	I0624 05:25:26.838558    6684 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:25:26.838558    6684 command_runner.go:130] > ID=buildroot
	I0624 05:25:26.838558    6684 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:25:26.838558    6684 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:25:26.838558    6684 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:25:26.838723    6684 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:25:26.839284    6684 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:25:26.840562    6684 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:25:26.840562    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:25:26.852000    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:25:26.869838    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:25:26.913275    6684 start.go:296] duration metric: took 4.9216242s for postStartSetup
	I0624 05:25:26.916217    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:29.100917    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:29.101730    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:29.101854    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:31.664481    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:31.664481    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:31.665557    6684 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:25:31.668651    6684 start.go:128] duration metric: took 2m6.9344578s to createHost
	I0624 05:25:31.668651    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:33.792827    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:33.793888    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:33.793923    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:36.400590    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:36.400590    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:36.407387    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:25:36.408200    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:25:36.408200    6684 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 05:25:36.534831    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719231936.539438984
	
	I0624 05:25:36.534975    6684 fix.go:216] guest clock: 1719231936.539438984
	I0624 05:25:36.534975    6684 fix.go:229] Guest: 2024-06-24 05:25:36.539438984 -0700 PDT Remote: 2024-06-24 05:25:31.6686516 -0700 PDT m=+132.566745601 (delta=4.870787384s)
	I0624 05:25:36.535095    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:38.639458    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:38.639458    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:38.639458    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:41.220418    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:41.220418    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:41.227082    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:25:41.227616    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.211.219 22 <nil> <nil>}
	I0624 05:25:41.227819    6684 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719231936
	I0624 05:25:41.393553    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:25:36 UTC 2024
	
	I0624 05:25:41.393737    6684 fix.go:236] clock set: Mon Jun 24 12:25:36 UTC 2024
	 (err=<nil>)
	I0624 05:25:41.393737    6684 start.go:83] releasing machines lock for "multinode-876600", held for 2m16.6601045s
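Before releasing the machine lock, the provisioner reads the guest clock with date +%s.%N, compares it with the host time, and resets the guest clock via sudo date -s @<unix-seconds> when the delta is noticeable (about 4.9s here). A rough Go sketch of that comparison, assuming a hypothetical 2-second threshold (illustrative only, not minikube's exact logic):

    // Sketch: compute the host/guest clock delta and, when it exceeds a
    // (hypothetical) threshold, emit a reset command like the one above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func clockFixCmd(guest, host time.Time, threshold time.Duration) (string, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta < threshold {
    		return "", false // close enough; leave the guest clock alone
    	}
    	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
    }

    func main() {
    	guest := time.Unix(1719231936, 539438984) // value read back from the guest
    	host := time.Now()
    	if cmd, ok := clockFixCmd(guest, host, 2*time.Second); ok {
    		fmt.Println(cmd)
    	}
    }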
	I0624 05:25:41.393884    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:43.601093    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:43.601093    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:43.602102    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:46.243771    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:46.243771    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:46.248213    6684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:25:46.248213    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:46.259075    6684 ssh_runner.go:195] Run: cat /version.json
	I0624 05:25:46.259075    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:25:48.536252    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:48.536501    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:48.536501    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:48.536501    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:25:48.536746    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:48.536807    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:25:51.248706    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:51.248935    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:51.249044    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:25:51.277557    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:25:51.277557    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:25:51.277557    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:25:51.351906    6684 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 05:25:51.352180    6684 ssh_runner.go:235] Completed: cat /version.json: (5.0930265s)
	I0624 05:25:51.365215    6684 ssh_runner.go:195] Run: systemctl --version
	I0624 05:25:51.449863    6684 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:25:51.449863    6684 command_runner.go:130] > systemd 252 (252)
	I0624 05:25:51.449863    6684 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2016299s)
	I0624 05:25:51.449863    6684 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 05:25:51.461966    6684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:25:51.471008    6684 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 05:25:51.471340    6684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:25:51.483923    6684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:25:51.510792    6684 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:25:51.511372    6684 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 05:25:51.511458    6684 start.go:494] detecting cgroup driver to use...
	I0624 05:25:51.511662    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:25:51.546932    6684 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:25:51.558303    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:25:51.587971    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:25:51.607928    6684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:25:51.621363    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:25:51.650245    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:25:51.681034    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:25:51.713964    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:25:51.748350    6684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:25:51.779870    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:25:51.810228    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:25:51.838807    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 05:25:51.867968    6684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:25:51.888794    6684 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:25:51.900516    6684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:25:51.930040    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:25:52.122600    6684 ssh_runner.go:195] Run: sudo systemctl restart containerd
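The sed commands above force containerd to the cgroupfs cgroup driver and the runc v2 runtime before containerd is restarted. The SystemdCgroup flip, for example, is a simple anchored regex replacement; a small Go sketch of the same rewrite (illustrative only, not the code minikube runs):

    // Sketch: the "force SystemdCgroup = false" rewrite applied to
    // /etc/containerd/config.toml above, done with a Go regexp.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }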
	I0624 05:25:52.153131    6684 start.go:494] detecting cgroup driver to use...
	I0624 05:25:52.164748    6684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:25:52.185740    6684 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:25:52.186347    6684 command_runner.go:130] > [Unit]
	I0624 05:25:52.186347    6684 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:25:52.186347    6684 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:25:52.186347    6684 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:25:52.186432    6684 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:25:52.186432    6684 command_runner.go:130] > StartLimitBurst=3
	I0624 05:25:52.186432    6684 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:25:52.186432    6684 command_runner.go:130] > [Service]
	I0624 05:25:52.186432    6684 command_runner.go:130] > Type=notify
	I0624 05:25:52.186432    6684 command_runner.go:130] > Restart=on-failure
	I0624 05:25:52.186432    6684 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:25:52.186432    6684 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:25:52.186540    6684 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:25:52.186540    6684 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:25:52.186540    6684 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:25:52.186540    6684 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:25:52.186620    6684 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:25:52.186643    6684 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:25:52.186703    6684 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:25:52.186703    6684 command_runner.go:130] > ExecStart=
	I0624 05:25:52.186752    6684 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:25:52.186752    6684 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:25:52.186752    6684 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:25:52.186752    6684 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:25:52.186752    6684 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:25:52.186827    6684 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:25:52.186827    6684 command_runner.go:130] > LimitCORE=infinity
	I0624 05:25:52.186827    6684 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:25:52.186853    6684 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:25:52.186853    6684 command_runner.go:130] > TasksMax=infinity
	I0624 05:25:52.186853    6684 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:25:52.186853    6684 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:25:52.186853    6684 command_runner.go:130] > Delegate=yes
	I0624 05:25:52.186853    6684 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:25:52.186853    6684 command_runner.go:130] > KillMode=process
	I0624 05:25:52.186853    6684 command_runner.go:130] > [Install]
	I0624 05:25:52.186853    6684 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:25:52.197497    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:25:52.228254    6684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:25:52.270181    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:25:52.302006    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:25:52.336783    6684 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:25:52.395081    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:25:52.419272    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:25:52.451184    6684 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:25:52.463673    6684 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:25:52.469010    6684 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:25:52.480902    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:25:52.497548    6684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:25:52.538538    6684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:25:52.731529    6684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:25:52.922125    6684 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:25:52.922535    6684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:25:52.965064    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:25:53.151845    6684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:25:55.657491    6684 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5056361s)
	I0624 05:25:55.670583    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:25:55.705510    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:25:55.738935    6684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:25:55.931059    6684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:25:56.133073    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:25:56.353573    6684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:25:56.401843    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:25:56.438613    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:25:56.661918    6684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:25:56.780785    6684 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:25:56.794604    6684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:25:56.804346    6684 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:25:56.804346    6684 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:25:56.804346    6684 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0624 05:25:56.804346    6684 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:25:56.804449    6684 command_runner.go:130] > Access: 2024-06-24 12:25:56.695177870 +0000
	I0624 05:25:56.804449    6684 command_runner.go:130] > Modify: 2024-06-24 12:25:56.695177870 +0000
	I0624 05:25:56.804449    6684 command_runner.go:130] > Change: 2024-06-24 12:25:56.700177881 +0000
	I0624 05:25:56.804449    6684 command_runner.go:130] >  Birth: -
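After unmasking and restarting cri-docker, the start-up waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A minimal sketch of such a polling wait, assuming a plain stat check and a hypothetical helper name (not minikube's actual implementation):

    // Sketch: poll for the CRI socket the way the "Will wait 60s for socket
    // path" step above does.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("cri-dockerd socket is ready")
    }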
	I0624 05:25:56.804542    6684 start.go:562] Will wait 60s for crictl version
	I0624 05:25:56.816499    6684 ssh_runner.go:195] Run: which crictl
	I0624 05:25:56.823488    6684 command_runner.go:130] > /usr/bin/crictl
	I0624 05:25:56.835908    6684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:25:56.892993    6684 command_runner.go:130] > Version:  0.1.0
	I0624 05:25:56.893672    6684 command_runner.go:130] > RuntimeName:  docker
	I0624 05:25:56.893672    6684 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:25:56.893672    6684 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:25:56.893738    6684 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 05:25:56.905150    6684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:25:56.937365    6684 command_runner.go:130] > 26.1.4
	I0624 05:25:56.947838    6684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:25:56.978735    6684 command_runner.go:130] > 26.1.4
	I0624 05:25:56.984476    6684 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:25:56.984706    6684 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:25:56.989519    6684 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:25:56.989519    6684 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:25:56.989519    6684 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:25:56.989519    6684 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:25:56.992745    6684 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:25:56.992745    6684 ip.go:210] interface addr: 172.31.208.1/20
	I0624 05:25:57.004761    6684 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:25:57.009875    6684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
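The bash one-liner above makes the host.minikube.internal mapping idempotent: any existing line for the name is dropped and a fresh "ip<TAB>name" entry is appended. The same upsert expressed as a short Go sketch (illustrative only):

    // Sketch: drop any old entry for the name, then append the new mapping,
    // mirroring the /etc/hosts update shown in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHostsEntry(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // remove any stale mapping for this name
    		}
    		out = append(out, line)
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Print(upsertHostsEntry(strings.TrimRight(string(data), "\n"),
    		"172.31.208.1", "host.minikube.internal"))
    }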
	I0624 05:25:57.032271    6684 kubeadm.go:877] updating cluster {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 05:25:57.032482    6684 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:25:57.042378    6684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:25:57.067323    6684 docker.go:685] Got preloaded images: 
	I0624 05:25:57.067323    6684 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0624 05:25:57.080650    6684 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 05:25:57.098692    6684 command_runner.go:139] > {"Repositories":{}}
	I0624 05:25:57.111636    6684 ssh_runner.go:195] Run: which lz4
	I0624 05:25:57.116618    6684 command_runner.go:130] > /usr/bin/lz4
	I0624 05:25:57.117651    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0624 05:25:57.129462    6684 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0624 05:25:57.136340    6684 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 05:25:57.136506    6684 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0624 05:25:57.136506    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0624 05:25:59.402409    6684 docker.go:649] duration metric: took 2.284551s to copy over tarball
	I0624 05:25:59.415628    6684 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0624 05:26:07.841403    6684 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4250351s)
	I0624 05:26:07.841449    6684 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0624 05:26:07.915337    6684 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0624 05:26:07.938329    6684 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0624 05:26:07.938623    6684 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0624 05:26:07.984679    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:26:08.208414    6684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:26:11.802693    6684 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.594227s)
	I0624 05:26:11.812502    6684 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0624 05:26:11.838757    6684 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0624 05:26:11.838757    6684 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:26:11.839790    6684 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0624 05:26:11.839883    6684 cache_images.go:84] Images are preloaded, skipping loading
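Loading is skipped here because every required image is already present after extracting the preload tarball; the earlier check (kube-apiserver:v1.30.2 "wasn't preloaded") is what triggered the scp of preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 and the tar -I lz4 extraction. A small sketch of that present/missing decision, assuming a hypothetical helper (not minikube's actual code):

    // Sketch: decide whether the preload tarball is still needed by checking
    // the images reported by `docker images --format {{.Repository}}:{{.Tag}}`.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func needsPreload(dockerImages, required []string) bool {
    	have := make(map[string]bool, len(dockerImages))
    	for _, img := range dockerImages {
    		have[img] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return true // at least one image missing; extract the tarball
    		}
    	}
    	return false
    }

    func main() {
    	got := strings.Split("registry.k8s.io/pause:3.9\nregistry.k8s.io/etcd:3.5.12-0", "\n")
    	want := []string{"registry.k8s.io/kube-apiserver:v1.30.2", "registry.k8s.io/pause:3.9"}
    	fmt.Println("needs preload:", needsPreload(got, want))
    }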
	I0624 05:26:11.839883    6684 kubeadm.go:928] updating node { 172.31.211.219 8443 v1.30.2 docker true true} ...
	I0624 05:26:11.840168    6684 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.211.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 05:26:11.850059    6684 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 05:26:11.885753    6684 command_runner.go:130] > cgroupfs
	I0624 05:26:11.886809    6684 cni.go:84] Creating CNI manager for ""
	I0624 05:26:11.886891    6684 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 05:26:11.886891    6684 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 05:26:11.887018    6684 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.211.219 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-876600 NodeName:multinode-876600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.211.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.211.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 05:26:11.887018    6684 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.211.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-876600"
	  kubeletExtraArgs:
	    node-ip: 172.31.211.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.211.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
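The generated kubeadm config pins the pod network to 10.244.0.0/16 and the service network to 10.96.0.0/12; the two ranges must stay disjoint for kube-proxy routing to work. A quick Go sketch of that sanity check using net/netip (illustrative only, not a step minikube performs here):

    // Sketch: verify that the pod and service CIDRs from the kubeadm config
    // above do not overlap.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	pod := netip.MustParsePrefix("10.244.0.0/16")
    	svc := netip.MustParsePrefix("10.96.0.0/12")
    	if pod.Overlaps(svc) {
    		fmt.Println("pod and service CIDRs overlap; kube-proxy routing would break")
    		return
    	}
    	fmt.Println("pod and service CIDRs are disjoint")
    }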
	
	I0624 05:26:11.901716    6684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:26:11.919638    6684 command_runner.go:130] > kubeadm
	I0624 05:26:11.919764    6684 command_runner.go:130] > kubectl
	I0624 05:26:11.919764    6684 command_runner.go:130] > kubelet
	I0624 05:26:11.919853    6684 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 05:26:11.933027    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 05:26:11.950197    6684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0624 05:26:11.981024    6684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:26:12.012525    6684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0624 05:26:12.058426    6684 ssh_runner.go:195] Run: grep 172.31.211.219	control-plane.minikube.internal$ /etc/hosts
	I0624 05:26:12.064642    6684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.211.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:26:12.099334    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:26:12.321380    6684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:26:12.356383    6684 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.211.219
	I0624 05:26:12.356499    6684 certs.go:194] generating shared ca certs ...
	I0624 05:26:12.356499    6684 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:12.357345    6684 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:26:12.357762    6684 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:26:12.357944    6684 certs.go:256] generating profile certs ...
	I0624 05:26:12.358463    6684 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.key
	I0624 05:26:12.358463    6684 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.crt with IP's: []
	I0624 05:26:12.999337    6684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.crt ...
	I0624 05:26:12.999337    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.crt: {Name:mk2d07c2012558e0de50238c322bd38a7671cf60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.001182    6684 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.key ...
	I0624 05:26:13.001182    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.key: {Name:mk5768d21c8aed53fa44b142cc019d05582db84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.002927    6684 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.c69894a0
	I0624 05:26:13.003093    6684 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.c69894a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.211.219]
	I0624 05:26:13.129350    6684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.c69894a0 ...
	I0624 05:26:13.129350    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.c69894a0: {Name:mk33c05de7ca54c2a285d8409c50cb432f422686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.131304    6684 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.c69894a0 ...
	I0624 05:26:13.131304    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.c69894a0: {Name:mk051cd9ff7a5bb56d75bc504989d58fccc25aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.132443    6684 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.c69894a0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt
	I0624 05:26:13.143380    6684 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.c69894a0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key
	I0624 05:26:13.145026    6684 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key
	I0624 05:26:13.145851    6684 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt with IP's: []
	I0624 05:26:13.392438    6684 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt ...
	I0624 05:26:13.392438    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt: {Name:mke011433b76a401009be6ae549e316a1ea9979b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.393445    6684 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key ...
	I0624 05:26:13.393445    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key: {Name:mk57f86c5698ea1975fda65a54d38cdbde19329c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:13.394442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:26:13.394442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:26:13.394442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:26:13.395442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:26:13.395442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 05:26:13.395442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 05:26:13.395442    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 05:26:13.404444    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 05:26:13.405441    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:26:13.405441    6684 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:26:13.405441    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:26:13.405441    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:26:13.406434    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:26:13.406434    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:26:13.407223    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:26:13.407554    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:26:13.407725    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:26:13.407965    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:26:13.409174    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:26:13.457592    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:26:13.514081    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:26:13.560036    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:26:13.604237    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0624 05:26:13.649534    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 05:26:13.695049    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 05:26:13.741819    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0624 05:26:13.786166    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:26:13.829614    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:26:13.875984    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:26:13.923143    6684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 05:26:13.967029    6684 ssh_runner.go:195] Run: openssl version
	I0624 05:26:13.976270    6684 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:26:13.988558    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:26:14.019508    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:26:14.026270    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:26:14.026270    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:26:14.039418    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:26:14.049790    6684 command_runner.go:130] > b5213941
	I0624 05:26:14.062548    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 05:26:14.095202    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:26:14.133670    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:26:14.140960    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:26:14.141051    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:26:14.153643    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:26:14.162278    6684 command_runner.go:130] > 51391683
	I0624 05:26:14.174765    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 05:26:14.208198    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:26:14.244345    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:26:14.250453    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:26:14.251276    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:26:14.264237    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:26:14.272423    6684 command_runner.go:130] > 3ec20f2e
	I0624 05:26:14.284771    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
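Each CA certificate is hashed with openssl x509 -hash and then linked into /etc/ssl/certs under <subject-hash>.0 (for example b5213941.0 above) so that OpenSSL's trust lookup can resolve it. A minimal Go sketch of that link-if-missing step, using the paths from the log only as example inputs (illustrative, not minikube's code):

    // Sketch: expose a CA certificate under its OpenSSL subject-hash name so
    // the system trust lookup can find it, skipping the link if it exists.
    package main

    import (
    	"fmt"
    	"os"
    )

    func linkBySubjectHash(certPath, hash string) error {
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if _, err := os.Lstat(link); err == nil {
    		return nil // already linked
    	}
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// The hash comes from `openssl x509 -hash -noout -in <cert>` as in the log.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "b5213941"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }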
	I0624 05:26:14.317075    6684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:26:14.323510    6684 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:26:14.323698    6684 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:26:14.323760    6684 kubeadm.go:391] StartCluster: {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:26:14.334789    6684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 05:26:14.369544    6684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 05:26:14.386671    6684 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0624 05:26:14.387624    6684 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0624 05:26:14.387624    6684 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0624 05:26:14.401578    6684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 05:26:14.433222    6684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 05:26:14.449863    6684 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0624 05:26:14.449863    6684 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0624 05:26:14.449863    6684 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0624 05:26:14.449863    6684 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:26:14.451266    6684 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:26:14.451301    6684 kubeadm.go:156] found existing configuration files:
	
	I0624 05:26:14.463686    6684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 05:26:14.479828    6684 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:26:14.480547    6684 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:26:14.495653    6684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 05:26:14.524017    6684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 05:26:14.540987    6684 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:26:14.541603    6684 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:26:14.556525    6684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 05:26:14.588653    6684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 05:26:14.606867    6684 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:26:14.606867    6684 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:26:14.621892    6684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 05:26:14.653150    6684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 05:26:14.669816    6684 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:26:14.669816    6684 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:26:14.681748    6684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
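	[editor's note] The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; any file that does not is removed so kubeadm can regenerate it. A minimal sketch of that logic, assuming nothing beyond the commands shown (file list and function name are illustrative):

	package main

	import (
		"os"
		"strings"
	)

	// cleanStaleConfigs removes any kubeconfig that does not mention the expected
	// endpoint, mirroring "grep https://control-plane.minikube.internal:8443 ... || rm -f".
	func cleanStaleConfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // equivalent to: sudo rm -f <f>
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443")
	}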
	I0624 05:26:14.701435    6684 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0624 05:26:15.143263    6684 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 05:26:15.143334    6684 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 05:26:27.838576    6684 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0624 05:26:27.838576    6684 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0624 05:26:27.838654    6684 command_runner.go:130] > [preflight] Running pre-flight checks
	I0624 05:26:27.838654    6684 kubeadm.go:309] [preflight] Running pre-flight checks
	I0624 05:26:27.838949    6684 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 05:26:27.839015    6684 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0624 05:26:27.839089    6684 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 05:26:27.839089    6684 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0624 05:26:27.839089    6684 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 05:26:27.839089    6684 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0624 05:26:27.839628    6684 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 05:26:27.839729    6684 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 05:26:27.842313    6684 out.go:204]   - Generating certificates and keys ...
	I0624 05:26:27.842643    6684 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0624 05:26:27.842643    6684 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0624 05:26:27.842889    6684 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0624 05:26:27.842889    6684 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0624 05:26:27.843015    6684 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0624 05:26:27.843015    6684 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0624 05:26:27.843015    6684 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0624 05:26:27.843015    6684 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0624 05:26:27.843015    6684 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0624 05:26:27.843015    6684 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0624 05:26:27.843559    6684 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0624 05:26:27.843559    6684 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0624 05:26:27.843829    6684 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0624 05:26:27.843829    6684 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0624 05:26:27.844335    6684 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-876600] and IPs [172.31.211.219 127.0.0.1 ::1]
	I0624 05:26:27.844335    6684 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-876600] and IPs [172.31.211.219 127.0.0.1 ::1]
	I0624 05:26:27.844552    6684 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0624 05:26:27.844552    6684 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0624 05:26:27.845028    6684 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-876600] and IPs [172.31.211.219 127.0.0.1 ::1]
	I0624 05:26:27.845028    6684 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-876600] and IPs [172.31.211.219 127.0.0.1 ::1]
	I0624 05:26:27.845290    6684 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0624 05:26:27.845290    6684 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0624 05:26:27.845483    6684 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0624 05:26:27.845483    6684 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0624 05:26:27.845604    6684 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0624 05:26:27.845604    6684 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0624 05:26:27.845604    6684 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 05:26:27.845604    6684 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 05:26:27.845604    6684 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 05:26:27.845604    6684 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 05:26:27.845604    6684 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 05:26:27.845604    6684 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 05:26:27.846162    6684 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 05:26:27.846162    6684 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 05:26:27.846368    6684 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 05:26:27.846368    6684 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 05:26:27.846368    6684 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 05:26:27.846368    6684 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 05:26:27.846368    6684 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 05:26:27.846368    6684 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 05:26:27.847050    6684 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 05:26:27.847050    6684 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 05:26:27.850432    6684 out.go:204]   - Booting up control plane ...
	I0624 05:26:27.850432    6684 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 05:26:27.850777    6684 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 05:26:27.851033    6684 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 05:26:27.851033    6684 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 05:26:27.851258    6684 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 05:26:27.851258    6684 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 05:26:27.851551    6684 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 05:26:27.851614    6684 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 05:26:27.851731    6684 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 05:26:27.851731    6684 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 05:26:27.851731    6684 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0624 05:26:27.851731    6684 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0624 05:26:27.852296    6684 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0624 05:26:27.852296    6684 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0624 05:26:27.852481    6684 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 05:26:27.852556    6684 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 05:26:27.852812    6684 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.428373ms
	I0624 05:26:27.852812    6684 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.428373ms
	I0624 05:26:27.852812    6684 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0624 05:26:27.852812    6684 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0624 05:26:27.852812    6684 kubeadm.go:309] [api-check] The API server is healthy after 7.002627856s
	I0624 05:26:27.852812    6684 command_runner.go:130] > [api-check] The API server is healthy after 7.002627856s
	I0624 05:26:27.852812    6684 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 05:26:27.852812    6684 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0624 05:26:27.853414    6684 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 05:26:27.853414    6684 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0624 05:26:27.853592    6684 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0624 05:26:27.853666    6684 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0624 05:26:27.854100    6684 kubeadm.go:309] [mark-control-plane] Marking the node multinode-876600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 05:26:27.854100    6684 command_runner.go:130] > [mark-control-plane] Marking the node multinode-876600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0624 05:26:27.854100    6684 command_runner.go:130] > [bootstrap-token] Using token: 4ta6p3.ti6soyzldlqf4k4e
	I0624 05:26:27.854100    6684 kubeadm.go:309] [bootstrap-token] Using token: 4ta6p3.ti6soyzldlqf4k4e
	I0624 05:26:27.856873    6684 out.go:204]   - Configuring RBAC rules ...
	I0624 05:26:27.857114    6684 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 05:26:27.857247    6684 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0624 05:26:27.857316    6684 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 05:26:27.857316    6684 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0624 05:26:27.857316    6684 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 05:26:27.857316    6684 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0624 05:26:27.858041    6684 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 05:26:27.858083    6684 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0624 05:26:27.858083    6684 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 05:26:27.858083    6684 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0624 05:26:27.858083    6684 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 05:26:27.858083    6684 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0624 05:26:27.858083    6684 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 05:26:27.858083    6684 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0624 05:26:27.858962    6684 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0624 05:26:27.858962    6684 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0624 05:26:27.858962    6684 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0624 05:26:27.859134    6684 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0624 05:26:27.859178    6684 kubeadm.go:309] 
	I0624 05:26:27.859322    6684 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0624 05:26:27.859322    6684 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0624 05:26:27.859373    6684 kubeadm.go:309] 
	I0624 05:26:27.859664    6684 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0624 05:26:27.859735    6684 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0624 05:26:27.859803    6684 kubeadm.go:309] 
	I0624 05:26:27.859891    6684 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0624 05:26:27.859891    6684 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0624 05:26:27.860066    6684 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 05:26:27.860133    6684 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0624 05:26:27.860133    6684 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 05:26:27.860133    6684 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0624 05:26:27.860133    6684 kubeadm.go:309] 
	I0624 05:26:27.860133    6684 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0624 05:26:27.860133    6684 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0624 05:26:27.860133    6684 kubeadm.go:309] 
	I0624 05:26:27.860133    6684 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 05:26:27.860133    6684 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0624 05:26:27.860133    6684 kubeadm.go:309] 
	I0624 05:26:27.860734    6684 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0624 05:26:27.860734    6684 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0624 05:26:27.860851    6684 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 05:26:27.860851    6684 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0624 05:26:27.860851    6684 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 05:26:27.860851    6684 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0624 05:26:27.860851    6684 kubeadm.go:309] 
	I0624 05:26:27.860851    6684 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0624 05:26:27.861393    6684 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0624 05:26:27.861497    6684 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0624 05:26:27.861497    6684 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0624 05:26:27.861497    6684 kubeadm.go:309] 
	I0624 05:26:27.861497    6684 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ta6p3.ti6soyzldlqf4k4e \
	I0624 05:26:27.861497    6684 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4ta6p3.ti6soyzldlqf4k4e \
	I0624 05:26:27.861497    6684 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 \
	I0624 05:26:27.862089    6684 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 \
	I0624 05:26:27.862135    6684 kubeadm.go:309] 	--control-plane 
	I0624 05:26:27.862199    6684 command_runner.go:130] > 	--control-plane 
	I0624 05:26:27.862199    6684 kubeadm.go:309] 
	I0624 05:26:27.862199    6684 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0624 05:26:27.862199    6684 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0624 05:26:27.862199    6684 kubeadm.go:309] 
	I0624 05:26:27.862199    6684 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4ta6p3.ti6soyzldlqf4k4e \
	I0624 05:26:27.862199    6684 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ta6p3.ti6soyzldlqf4k4e \
	I0624 05:26:27.862840    6684 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
	I0624 05:26:27.862932    6684 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
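	[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above follows the usual kubeadm convention: "sha256:" plus the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short Go sketch of that computation, under that assumption (the ca.crt path is illustrative, based on the certs directory the log uses):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}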
	I0624 05:26:27.862932    6684 cni.go:84] Creating CNI manager for ""
	I0624 05:26:27.862932    6684 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0624 05:26:27.865597    6684 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0624 05:26:27.883474    6684 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0624 05:26:27.891697    6684 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0624 05:26:27.891958    6684 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0624 05:26:27.891958    6684 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0624 05:26:27.891958    6684 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0624 05:26:27.891958    6684 command_runner.go:130] > Access: 2024-06-24 12:24:31.213963800 +0000
	I0624 05:26:27.891958    6684 command_runner.go:130] > Modify: 2024-06-21 04:42:41.000000000 +0000
	I0624 05:26:27.891958    6684 command_runner.go:130] > Change: 2024-06-24 05:24:22.725000000 +0000
	I0624 05:26:27.891958    6684 command_runner.go:130] >  Birth: -
	I0624 05:26:27.892062    6684 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0624 05:26:27.892123    6684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0624 05:26:27.942044    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0624 05:26:28.638757    6684 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0624 05:26:28.638757    6684 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0624 05:26:28.638877    6684 command_runner.go:130] > serviceaccount/kindnet created
	I0624 05:26:28.638877    6684 command_runner.go:130] > daemonset.apps/kindnet created
	I0624 05:26:28.638949    6684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 05:26:28.655093    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-876600 minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=multinode-876600 minikube.k8s.io/primary=true
	I0624 05:26:28.655093    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:28.666145    6684 command_runner.go:130] > -16
	I0624 05:26:28.667173    6684 ops.go:34] apiserver oom_adj: -16
	I0624 05:26:28.819515    6684 command_runner.go:130] > node/multinode-876600 labeled
	I0624 05:26:28.821059    6684 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0624 05:26:28.836038    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:28.956321    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:29.333669    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:29.469033    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:29.842186    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:29.946504    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:30.342765    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:30.451779    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:30.843851    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:30.961913    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:31.345839    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:31.453179    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:31.850317    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:31.957688    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:32.335646    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:32.447557    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:32.836429    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:32.938995    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:33.348477    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:33.450532    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:33.839532    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:33.957374    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:34.342094    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:34.452238    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:34.850752    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:34.953631    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:35.349442    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:35.450032    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:35.841596    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:35.945459    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:36.344784    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:36.442256    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:36.840813    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:36.937676    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:37.345375    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:37.468391    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:37.846296    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:37.953784    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:38.348500    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:38.464550    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:38.838427    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:38.948550    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:39.340126    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:39.455634    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:39.843642    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:39.958445    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:40.343710    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:40.473874    6684 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0624 05:26:40.839162    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0624 05:26:40.968182    6684 command_runner.go:130] > NAME      SECRETS   AGE
	I0624 05:26:40.968433    6684 command_runner.go:130] > default   0         0s
	I0624 05:26:40.968433    6684 kubeadm.go:1107] duration metric: took 12.3294356s to wait for elevateKubeSystemPrivileges
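	[editor's note] The repeated "kubectl get sa default" calls above (roughly every 500ms, each returning NotFound until 05:26:40) are a readiness wait: minikube polls until the "default" ServiceAccount exists, which signals that the service-account controller is up before it proceeds. A minimal sketch of that loop, assuming only the kubectl invocation shown in the log (the plain "kubectl" binary name and function name are illustrative):

	package main

	import (
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries "kubectl get sa default" until it succeeds.
	func waitForDefaultSA(kubeconfig string) error {
		for {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // "default" ServiceAccount found
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		_ = waitForDefaultSA("/var/lib/minikube/kubeconfig")
	}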
	W0624 05:26:40.968515    6684 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0624 05:26:40.968635    6684 kubeadm.go:393] duration metric: took 26.644771s to StartCluster
	I0624 05:26:40.968741    6684 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:40.968995    6684 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:26:40.971151    6684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:26:40.972692    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0624 05:26:40.972692    6684 start.go:234] Will wait 6m0s for node &{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 05:26:40.972856    6684 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 05:26:40.973004    6684 addons.go:69] Setting storage-provisioner=true in profile "multinode-876600"
	I0624 05:26:40.973004    6684 addons.go:234] Setting addon storage-provisioner=true in "multinode-876600"
	I0624 05:26:40.973160    6684 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:26:40.973261    6684 addons.go:69] Setting default-storageclass=true in profile "multinode-876600"
	I0624 05:26:40.973261    6684 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-876600"
	I0624 05:26:40.973261    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:26:40.973957    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:26:40.974600    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:26:40.975406    6684 out.go:177] * Verifying Kubernetes components...
	I0624 05:26:40.992289    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:26:41.303937    6684 command_runner.go:130] > apiVersion: v1
	I0624 05:26:41.303937    6684 command_runner.go:130] > data:
	I0624 05:26:41.303937    6684 command_runner.go:130] >   Corefile: |
	I0624 05:26:41.303937    6684 command_runner.go:130] >     .:53 {
	I0624 05:26:41.303937    6684 command_runner.go:130] >         errors
	I0624 05:26:41.303937    6684 command_runner.go:130] >         health {
	I0624 05:26:41.303937    6684 command_runner.go:130] >            lameduck 5s
	I0624 05:26:41.303937    6684 command_runner.go:130] >         }
	I0624 05:26:41.303937    6684 command_runner.go:130] >         ready
	I0624 05:26:41.303937    6684 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0624 05:26:41.303937    6684 command_runner.go:130] >            pods insecure
	I0624 05:26:41.303937    6684 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0624 05:26:41.303937    6684 command_runner.go:130] >            ttl 30
	I0624 05:26:41.303937    6684 command_runner.go:130] >         }
	I0624 05:26:41.303937    6684 command_runner.go:130] >         prometheus :9153
	I0624 05:26:41.303937    6684 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0624 05:26:41.303937    6684 command_runner.go:130] >            max_concurrent 1000
	I0624 05:26:41.303937    6684 command_runner.go:130] >         }
	I0624 05:26:41.303937    6684 command_runner.go:130] >         cache 30
	I0624 05:26:41.303937    6684 command_runner.go:130] >         loop
	I0624 05:26:41.303937    6684 command_runner.go:130] >         reload
	I0624 05:26:41.303937    6684 command_runner.go:130] >         loadbalance
	I0624 05:26:41.303937    6684 command_runner.go:130] >     }
	I0624 05:26:41.303937    6684 command_runner.go:130] > kind: ConfigMap
	I0624 05:26:41.303937    6684 command_runner.go:130] > metadata:
	I0624 05:26:41.303937    6684 command_runner.go:130] >   creationTimestamp: "2024-06-24T12:26:27Z"
	I0624 05:26:41.303937    6684 command_runner.go:130] >   name: coredns
	I0624 05:26:41.303937    6684 command_runner.go:130] >   namespace: kube-system
	I0624 05:26:41.303937    6684 command_runner.go:130] >   resourceVersion: "231"
	I0624 05:26:41.303937    6684 command_runner.go:130] >   uid: 170591c4-eada-47ac-ab3c-276bd8c08a40
	I0624 05:26:41.307722    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.31.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0624 05:26:41.431412    6684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:26:41.793428    6684 command_runner.go:130] > configmap/coredns replaced
	I0624 05:26:41.793549    6684 start.go:946] {"host.minikube.internal": 172.31.208.1} host record injected into CoreDNS's ConfigMap
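	[editor's note] The sed pipeline two steps above rewrites the CoreDNS Corefile before replacing the ConfigMap: it inserts a hosts block mapping host.minikube.internal to the host gateway ahead of the "forward . /etc/resolv.conf" plugin, and adds "log" before "errors". A small Go sketch of the same string edit, assuming only what the sed expressions show (function name and sample Corefile are illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord mirrors the sed edits: add a hosts{} block before the forward
	// plugin and a log directive before errors.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		out := strings.Replace(corefile, "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
		return strings.Replace(out, "        errors", "        log\n        errors", 1)
	}

	func main() {
		sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n    }\n"
		fmt.Println(injectHostRecord(sample, "172.31.208.1"))
	}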
	I0624 05:26:41.794793    6684 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:26:41.794880    6684 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:26:41.794963    6684 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.211.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:26:41.794963    6684 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.211.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:26:41.797305    6684 cert_rotation.go:137] Starting client certificate rotation controller
	I0624 05:26:41.797663    6684 node_ready.go:35] waiting up to 6m0s for node "multinode-876600" to be "Ready" ...
	I0624 05:26:41.797663    6684 round_trippers.go:463] GET https://172.31.211.219:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0624 05:26:41.797663    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:41.797663    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:41.797663    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:41.797663    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:41.797663    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:41.797663    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:41.797663    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:41.815111    6684 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0624 05:26:41.816201    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:41.816201    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:41 GMT
	I0624 05:26:41.816201    6684 round_trippers.go:580]     Audit-Id: 26c70ef2-dbc1-4b8f-8e71-e962137b08ba
	I0624 05:26:41.816201    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:41.816201    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:41.816201    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:41.816201    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:41.816653    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:41.817977    6684 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0624 05:26:41.818088    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:41.818141    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:41 GMT
	I0624 05:26:41.818141    6684 round_trippers.go:580]     Audit-Id: fbc61461-04b7-4ed5-8a69-703f01378ace
	I0624 05:26:41.818141    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:41.818141    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:41.818141    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:41.818141    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:41.818251    6684 round_trippers.go:580]     Content-Length: 291
	I0624 05:26:41.818251    6684 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4f0cda9a-6558-4da5-a6f3-65714bee0e77","resourceVersion":"351","creationTimestamp":"2024-06-24T12:26:27Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0624 05:26:41.818946    6684 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4f0cda9a-6558-4da5-a6f3-65714bee0e77","resourceVersion":"351","creationTimestamp":"2024-06-24T12:26:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0624 05:26:41.819087    6684 round_trippers.go:463] PUT https://172.31.211.219:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0624 05:26:41.819122    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:41.819122    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:41.819122    6684 round_trippers.go:473]     Content-Type: application/json
	I0624 05:26:41.819122    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:41.835293    6684 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0624 05:26:41.835994    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:41.835994    6684 round_trippers.go:580]     Audit-Id: dc0d8e00-f1ea-451a-907e-417f4209916e
	I0624 05:26:41.835994    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:41.835994    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:41.835994    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:41.835994    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:41.835994    6684 round_trippers.go:580]     Content-Length: 291
	I0624 05:26:41.835994    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:41 GMT
	I0624 05:26:41.836116    6684 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4f0cda9a-6558-4da5-a6f3-65714bee0e77","resourceVersion":"363","creationTimestamp":"2024-06-24T12:26:27Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0624 05:26:42.302862    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:42.305672    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:42.305672    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:42.302862    6684 round_trippers.go:463] GET https://172.31.211.219:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0624 05:26:42.305672    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:42.305672    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:42.305672    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:42.305672    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:42.309663    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:42.309663    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:42.309663    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:42.309663    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Content-Length: 291
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:42.309663    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:42.309663    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:42 GMT
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Audit-Id: 51241fa8-106b-4246-a1df-36662637d34a
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:42 GMT
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Audit-Id: ab99b833-ec19-488e-8e91-427b329b4648
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:42.309663    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:42.309663    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:42.309663    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:42.309663    6684 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4f0cda9a-6558-4da5-a6f3-65714bee0e77","resourceVersion":"373","creationTimestamp":"2024-06-24T12:26:27Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0624 05:26:42.309663    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:42.309663    6684 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-876600" context rescaled to 1 replicas
	I0624 05:26:42.812171    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:42.812251    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:42.812251    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:42.812251    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:42.814795    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:42.815662    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:42.815662    6684 round_trippers.go:580]     Audit-Id: 7bad949d-7007-4896-806b-94af7c48f783
	I0624 05:26:42.815662    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:42.815662    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:42.815662    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:42.815662    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:42.815745    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:42 GMT
	I0624 05:26:42.815993    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:43.306117    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:43.306196    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:43.306196    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:43.306196    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:43.311634    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:26:43.311634    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:43.311634    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:43.311634    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:43.311634    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:43 GMT
	I0624 05:26:43.311634    6684 round_trippers.go:580]     Audit-Id: 81a9941b-5e74-4155-89a0-05f2430cb3c1
	I0624 05:26:43.311634    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:43.311634    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:43.311634    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:43.368382    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:26:43.368624    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:43.368382    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:26:43.368719    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:43.370265    6684 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:26:43.370969    6684 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.211.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:26:43.371970    6684 addons.go:234] Setting addon default-storageclass=true in "multinode-876600"
	I0624 05:26:43.372102    6684 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:26:43.373071    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:26:43.375482    6684 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:26:43.377974    6684 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 05:26:43.377974    6684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0624 05:26:43.377974    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:26:43.812023    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:43.812023    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:43.812023    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:43.812023    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:43.909629    6684 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I0624 05:26:43.909911    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:43.909911    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:43.909996    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:43.909996    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:43 GMT
	I0624 05:26:43.909996    6684 round_trippers.go:580]     Audit-Id: af4df405-8ed6-418b-8b3e-e92b10d11592
	I0624 05:26:43.909996    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:43.909996    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:43.910392    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:43.911032    6684 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:26:44.304404    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:44.304682    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:44.304682    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:44.304682    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:44.329277    6684 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0624 05:26:44.329642    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:44.329642    6684 round_trippers.go:580]     Audit-Id: 15a2c54c-93b1-4b7d-855c-2689259b72bc
	I0624 05:26:44.329642    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:44.329642    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:44.329642    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:44.329642    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:44.329757    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:44 GMT
	I0624 05:26:44.379665    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:44.812579    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:44.812579    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:44.812579    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:44.812880    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:44.816352    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:44.816352    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:44.816352    6684 round_trippers.go:580]     Audit-Id: a330e8c1-eed1-48fa-9a68-8b99e6e666e5
	I0624 05:26:44.816352    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:44.816352    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:44.816352    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:44.816352    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:44.816352    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:44 GMT
	I0624 05:26:44.817346    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:45.308358    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:45.308358    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:45.308358    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:45.308358    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:45.312369    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:45.313043    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:45.313043    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:45.313174    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:45.313174    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:45 GMT
	I0624 05:26:45.313174    6684 round_trippers.go:580]     Audit-Id: 97dda607-4db3-4ad9-a620-63eb9b02dc78
	I0624 05:26:45.313174    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:45.313174    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:45.314316    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:45.746021    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:26:45.746114    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:45.746240    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:26:45.753191    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:26:45.753191    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:45.753191    6684 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0624 05:26:45.753191    6684 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0624 05:26:45.753191    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:26:45.802317    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:45.802317    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:45.802317    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:45.802317    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:45.806152    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:45.806152    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:45.806152    6684 round_trippers.go:580]     Audit-Id: c54c271e-b5e9-4d3a-aab7-b54893920cae
	I0624 05:26:45.806152    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:45.806152    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:45.806816    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:45.806816    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:45.806816    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:45 GMT
	I0624 05:26:45.807287    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:46.306194    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:46.306194    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:46.306194    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:46.306194    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:46.311296    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:46.311296    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:46.311296    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:46.311434    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:46 GMT
	I0624 05:26:46.311488    6684 round_trippers.go:580]     Audit-Id: a08d2c51-0b15-4c84-85fb-3fa18d604af3
	I0624 05:26:46.311488    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:46.311488    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:46.311488    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:46.312189    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:46.313195    6684 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:26:46.811886    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:46.811886    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:46.811886    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:46.811886    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:46.818176    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:26:46.818176    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:46.818176    6684 round_trippers.go:580]     Audit-Id: 2329dad3-65c0-415b-a6ed-0bf1529160d7
	I0624 05:26:46.818176    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:46.818605    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:46.818605    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:46.818605    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:46.818605    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:46 GMT
	I0624 05:26:46.819170    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:47.302881    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:47.303122    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:47.303122    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:47.303122    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:47.307633    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:47.308133    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:47.308133    6684 round_trippers.go:580]     Audit-Id: af9bf13f-c2c3-4a6f-b7c5-2cf15981aaea
	I0624 05:26:47.308133    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:47.308133    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:47.308133    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:47.308133    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:47.308133    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:47 GMT
	I0624 05:26:47.308433    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:47.811421    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:47.811421    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:47.811554    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:47.811554    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:47.815764    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:47.815764    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:47.815764    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:47.815764    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:47.815764    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:47 GMT
	I0624 05:26:47.815764    6684 round_trippers.go:580]     Audit-Id: 9d3de10a-c2ae-4f85-aa9d-b627b421b71d
	I0624 05:26:47.815764    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:47.815764    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:47.815764    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:48.055244    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:26:48.055313    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:48.055380    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:26:48.305016    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:48.305016    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:48.305016    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:48.305016    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:48.309041    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:48.309318    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:48.309318    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:48.309318    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:48.309318    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:48.309411    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:48 GMT
	I0624 05:26:48.309411    6684 round_trippers.go:580]     Audit-Id: b5f07fbe-0f91-4d64-b25b-caae5ba3301a
	I0624 05:26:48.309411    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:48.310216    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:48.535569    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:26:48.536183    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:48.536348    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
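
The [executing ==>] / [stdout =====>] pairs above come from shelling out to PowerShell to read the Hyper-V VM's state and its first IP address, which then feeds the SSH client created here. A rough, hypothetical Go sketch of that pattern (command strings copied verbatim from the log, error handling simplified; this is not libmachine's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPS invokes PowerShell non-interactively and returns trimmed stdout.
    func runPS(command string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", command,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // VM state query, as in the "[executing ==>]" lines above.
        state, err := runPS(`( Hyper-V\Get-VM multinode-876600 ).state`)
        if err != nil {
            panic(err)
        }
        // First IP of the first network adapter, as resolved before the SSH dial.
        ip, err := runPS(`(( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            panic(err)
        }
        fmt.Println(state, ip) // e.g. "Running 172.31.211.219", matching the stdout lines above
    }
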
	I0624 05:26:48.699705    6684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0624 05:26:48.807768    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:48.807768    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:48.807768    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:48.807768    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:48.812722    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:48.812722    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:48.812722    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:48.812722    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:48.812722    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:48 GMT
	I0624 05:26:48.812722    6684 round_trippers.go:580]     Audit-Id: ddd195fb-cef7-4535-9098-834cb10b0b60
	I0624 05:26:48.812722    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:48.812722    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:48.813373    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:48.813751    6684 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:26:49.258651    6684 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0624 05:26:49.258731    6684 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0624 05:26:49.258731    6684 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0624 05:26:49.258731    6684 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0624 05:26:49.258856    6684 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0624 05:26:49.258900    6684 command_runner.go:130] > pod/storage-provisioner created
	I0624 05:26:49.303502    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:49.303743    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:49.303743    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:49.303743    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:49.313289    6684 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 05:26:49.313289    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:49.313289    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:49 GMT
	I0624 05:26:49.313289    6684 round_trippers.go:580]     Audit-Id: 6775ccdc-e4b7-43cb-bdbd-56e25869ba6b
	I0624 05:26:49.313289    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:49.313289    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:49.313289    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:49.313289    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:49.313289    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:49.811544    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:49.811544    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:49.811544    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:49.811544    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:49.814874    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:49.814874    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:49.814972    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:49.814972    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:49.814972    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:49.814972    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:49 GMT
	I0624 05:26:49.814972    6684 round_trippers.go:580]     Audit-Id: 0124431d-7898-4263-b7ff-5004f64fe037
	I0624 05:26:49.814972    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:49.815375    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:50.303164    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:50.303164    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:50.303164    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:50.303164    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:50.306358    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:50.307420    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:50.307420    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:50.307420    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:50.307471    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:50.307471    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:50.307471    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:50 GMT
	I0624 05:26:50.307471    6684 round_trippers.go:580]     Audit-Id: cf322066-c6fd-49c2-bdf7-04de43af0a33
	I0624 05:26:50.307534    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:50.689244    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:26:50.689244    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:50.690126    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:26:50.805885    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:50.805885    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:50.806025    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:50.806025    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:50.809606    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:50.809666    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:50.809717    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:50.809717    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:50 GMT
	I0624 05:26:50.809717    6684 round_trippers.go:580]     Audit-Id: 0a57aae1-3816-4539-8af7-b698b9f36638
	I0624 05:26:50.809751    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:50.809751    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:50.809751    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:50.809751    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:50.825567    6684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0624 05:26:50.972501    6684 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0624 05:26:50.974190    6684 round_trippers.go:463] GET https://172.31.211.219:8443/apis/storage.k8s.io/v1/storageclasses
	I0624 05:26:50.974274    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:50.974320    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:50.974320    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:50.977698    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:50.977698    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:50.977698    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:50.977698    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:50.977698    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:50.977698    6684 round_trippers.go:580]     Content-Length: 1273
	I0624 05:26:50.977698    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:50 GMT
	I0624 05:26:50.977698    6684 round_trippers.go:580]     Audit-Id: dc9c119e-d2df-4e3e-a730-2422f952144f
	I0624 05:26:50.977698    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:50.977698    6684 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"standard","uid":"414ea6ef-0f19-4bf3-8e8e-a9baa9b9d5c7","resourceVersion":"401","creationTimestamp":"2024-06-24T12:26:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-24T12:26:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0624 05:26:50.978742    6684 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"414ea6ef-0f19-4bf3-8e8e-a9baa9b9d5c7","resourceVersion":"401","creationTimestamp":"2024-06-24T12:26:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-24T12:26:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0624 05:26:50.978742    6684 round_trippers.go:463] PUT https://172.31.211.219:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0624 05:26:50.978742    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:50.978742    6684 round_trippers.go:473]     Content-Type: application/json
	I0624 05:26:50.978742    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:50.978742    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:50.982922    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:50.983050    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:50.983050    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:50.983050    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:50.983050    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:50.983095    6684 round_trippers.go:580]     Content-Length: 1220
	I0624 05:26:50.983095    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:50 GMT
	I0624 05:26:50.983095    6684 round_trippers.go:580]     Audit-Id: fb7051ce-255f-41e4-957c-c881933c37d2
	I0624 05:26:50.983095    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:50.983156    6684 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"414ea6ef-0f19-4bf3-8e8e-a9baa9b9d5c7","resourceVersion":"401","creationTimestamp":"2024-06-24T12:26:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-24T12:26:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0624 05:26:50.991061    6684 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0624 05:26:50.993359    6684 addons.go:510] duration metric: took 10.0204638s for enable addons: enabled=[storage-provisioner default-storageclass]
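
The storageclass.yaml applied just above corresponds to the "standard" StorageClass visible in the PUT/GET response bodies (provisioner k8s.io/minikube-hostpath, default-class annotation, EnsureExists addon-manager label). The real addon applies YAML through kubectl inside the VM; purely as a hedged sketch, an equivalent object could be created with client-go like this, with an illustrative kubeconfig path:

    package main

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Fields mirror the StorageClass shown in the response body above.
        sc := &storagev1.StorageClass{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "standard",
                Labels:      map[string]string{"addonmanager.kubernetes.io/mode": "EnsureExists"},
                Annotations: map[string]string{"storageclass.kubernetes.io/is-default-class": "true"},
            },
            Provisioner: "k8s.io/minikube-hostpath",
        }
        if _, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
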
	I0624 05:26:51.305603    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:51.305603    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:51.305603    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:51.305603    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:51.309063    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:51.309902    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:51.309902    6684 round_trippers.go:580]     Audit-Id: f6f70c4c-d326-422b-9800-3a4ce3f22035
	I0624 05:26:51.309902    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:51.309902    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:51.309902    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:51.309902    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:51.309902    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:51 GMT
	I0624 05:26:51.310160    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:51.310886    6684 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:26:51.805070    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:51.805070    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:51.805209    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:51.805209    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:51.809497    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:51.809497    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:51.809969    6684 round_trippers.go:580]     Audit-Id: ab3acf00-4ea7-49a8-b65f-6b9ef83d27f7
	I0624 05:26:51.809969    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:51.809969    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:51.809969    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:51.809969    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:51.809969    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:51 GMT
	I0624 05:26:51.810384    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"305","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0624 05:26:52.313917    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:52.314033    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.314033    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.314033    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.318253    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:52.318550    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.318550    6684 round_trippers.go:580]     Audit-Id: 4f314abe-79c0-4f2e-80a5-77be37d78e74
	I0624 05:26:52.318624    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.318624    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.318624    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.318624    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.318624    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.319185    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:52.320921    6684 node_ready.go:49] node "multinode-876600" has status "Ready":"True"
	I0624 05:26:52.321573    6684 node_ready.go:38] duration metric: took 10.523869s for node "multinode-876600" to be "Ready" ...
	I0624 05:26:52.321573    6684 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:26:52.321573    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:26:52.321573    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.321573    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.321573    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.329907    6684 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 05:26:52.330231    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.330231    6684 round_trippers.go:580]     Audit-Id: becd6a77-e5e5-415a-8cc2-2f07458c408b
	I0624 05:26:52.330231    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.330231    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.330231    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.330231    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.330231    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.332951    6684 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"409","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0624 05:26:52.337956    6684 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:52.337956    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:26:52.337956    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.337956    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.337956    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.341977    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:52.341977    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.342923    6684 round_trippers.go:580]     Audit-Id: 9fd516bc-5986-4338-968d-c1b67c86c369
	I0624 05:26:52.342959    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.342959    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.342959    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.342959    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.342959    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.343180    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"409","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0624 05:26:52.343338    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:52.343338    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.343338    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.343338    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.346132    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:52.346132    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.346132    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.346132    6684 round_trippers.go:580]     Audit-Id: f8cc6586-c532-4220-abad-4f727679e3b9
	I0624 05:26:52.346132    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.346132    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.346132    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.346132    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.347787    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:52.852221    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:26:52.852221    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.852221    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.852221    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.855824    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:52.855824    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.855824    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.856318    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.856318    6684 round_trippers.go:580]     Audit-Id: 23b665c0-fb2e-43dd-9799-7fbf2252cccb
	I0624 05:26:52.856318    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.856318    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.856318    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.856318    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"409","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0624 05:26:52.857463    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:52.857533    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:52.857533    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:52.857533    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:52.859919    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:52.859919    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:52.860116    6684 round_trippers.go:580]     Audit-Id: 10dc0af6-cdfe-44cb-81cc-10bdde4e3ad5
	I0624 05:26:52.860116    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:52.860116    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:52.860116    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:52.860116    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:52.860116    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:52 GMT
	I0624 05:26:52.860212    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:53.345957    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:26:53.346039    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:53.346039    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:53.346039    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:53.349491    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:53.349869    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:53.349869    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:53.349869    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:53 GMT
	I0624 05:26:53.349869    6684 round_trippers.go:580]     Audit-Id: 0a845ba9-0ad9-438c-8952-c359ae2bc3d4
	I0624 05:26:53.349975    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:53.349975    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:53.349975    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:53.350156    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"409","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0624 05:26:53.351002    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:53.351002    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:53.351084    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:53.351084    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:53.353464    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:53.353464    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:53.353464    6684 round_trippers.go:580]     Audit-Id: c62a582e-a6f1-4163-add9-a8da3e13ad3a
	I0624 05:26:53.353464    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:53.353464    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:53.353464    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:53.353464    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:53.353464    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:53 GMT
	I0624 05:26:53.354076    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:53.848501    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:26:53.848758    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:53.848758    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:53.848758    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:53.851522    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:53.851522    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:53.851522    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:53.851522    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:53.852541    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:53.852563    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:53 GMT
	I0624 05:26:53.852563    6684 round_trippers.go:580]     Audit-Id: bfabeeb2-2b1b-454d-b453-5dcef54f4953
	I0624 05:26:53.852563    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:53.852705    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"409","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0624 05:26:53.853731    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:53.853789    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:53.853789    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:53.853789    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:53.856253    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:53.856253    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:53.856454    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:53.856454    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:53.856454    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:53.856454    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:53.856454    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:53 GMT
	I0624 05:26:53.856454    6684 round_trippers.go:580]     Audit-Id: 02d8ced8-1f0c-4985-8c23-cd4795bf0987
	I0624 05:26:53.856809    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.348923    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:26:54.349268    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.349268    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.349268    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.352641    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:54.352720    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.352720    6684 round_trippers.go:580]     Audit-Id: d3f6d86c-6228-4597-a2e0-d02ef8abbd79
	I0624 05:26:54.352720    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.352720    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.352720    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.352720    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.352720    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.352980    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"420","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0624 05:26:54.353775    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.353872    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.353872    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.353872    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.356672    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:54.356672    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.356672    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.356672    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.357490    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.357490    6684 round_trippers.go:580]     Audit-Id: d150c592-0161-4a7b-8c2d-7433a8749acb
	I0624 05:26:54.357490    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.357490    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.357703    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.358540    6684 pod_ready.go:92] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.358605    6684 pod_ready.go:81] duration metric: took 2.0206412s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.358605    6684 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.358681    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:26:54.358681    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.358681    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.358681    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.364829    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:26:54.365003    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.365003    6684 round_trippers.go:580]     Audit-Id: 143a0538-dfcb-4486-9f18-734985cc9959
	I0624 05:26:54.365003    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.365003    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.365003    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.365003    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.365003    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.365003    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"4906666c-eed2-4f7c-a011-5a9b589fdcdc","resourceVersion":"386","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.211.219:2379","kubernetes.io/config.hash":"1e708d5cd73627b4d4daa56de34a8c4e","kubernetes.io/config.mirror":"1e708d5cd73627b4d4daa56de34a8c4e","kubernetes.io/config.seen":"2024-06-24T12:26:27.293357655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0624 05:26:54.365691    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.365691    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.365691    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.365691    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.368920    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:54.369031    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.369031    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.369031    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.369031    6684 round_trippers.go:580]     Audit-Id: 7dccdf7f-8608-499a-9d59-dfd092fd42b0
	I0624 05:26:54.369079    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.369079    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.369079    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.369993    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.370517    6684 pod_ready.go:92] pod "etcd-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.370517    6684 pod_ready.go:81] duration metric: took 11.9111ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.370646    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.370719    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:26:54.370787    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.370787    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.370787    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.377999    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:26:54.377999    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.377999    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.377999    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.377999    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.377999    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.377999    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.377999    6684 round_trippers.go:580]     Audit-Id: ed5fd7c6-5762-4802-b669-ff79b2234174
	I0624 05:26:54.377999    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a7f191-9dd7-4dcd-8e9e-d05deeac2349","resourceVersion":"384","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.211.219:8443","kubernetes.io/config.hash":"f659c666f2215840bd65758467c8d95f","kubernetes.io/config.mirror":"f659c666f2215840bd65758467c8d95f","kubernetes.io/config.seen":"2024-06-24T12:26:27.293359155Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0624 05:26:54.379006    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.379006    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.379006    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.379006    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.382010    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:54.382010    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.382010    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.382010    6684 round_trippers.go:580]     Audit-Id: 493aec35-de1b-4883-a697-9ab8d5871065
	I0624 05:26:54.382010    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.382231    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.382231    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.382231    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.382420    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.382940    6684 pod_ready.go:92] pod "kube-apiserver-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.382940    6684 pod_ready.go:81] duration metric: took 12.294ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.382993    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.383066    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:26:54.383123    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.383123    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.383123    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.385111    6684 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:26:54.385776    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.385776    6684 round_trippers.go:580]     Audit-Id: addf866d-0681-464f-bf8a-c26eff182d73
	I0624 05:26:54.385776    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.385776    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.385776    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.385839    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.385839    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.386177    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"383","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0624 05:26:54.386826    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.386906    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.386906    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.386906    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.392813    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:26:54.392855    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.392855    6684 round_trippers.go:580]     Audit-Id: 8d65f5f1-14c3-47e2-85b6-433ec8040782
	I0624 05:26:54.392855    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.392855    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.392855    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.392855    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.392855    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.392855    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.393463    6684 pod_ready.go:92] pod "kube-controller-manager-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.393463    6684 pod_ready.go:81] duration metric: took 10.4703ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.393463    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.393463    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:26:54.393463    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.393463    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.393463    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.395625    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:54.395625    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.395625    6684 round_trippers.go:580]     Audit-Id: 3cd962d6-8aec-4b7c-bd72-9325510239e7
	I0624 05:26:54.395625    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.395625    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.395625    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.395625    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.395625    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.396645    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"378","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0624 05:26:54.396645    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.396645    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.396645    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.396645    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.398615    6684 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:26:54.399704    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.399704    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.399704    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.399704    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.399704    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.399704    6684 round_trippers.go:580]     Audit-Id: 29032931-cab0-4465-ba47-759fdf17a2b2
	I0624 05:26:54.399704    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.399704    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.399704    6684 pod_ready.go:92] pod "kube-proxy-lcc9v" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.399704    6684 pod_ready.go:81] duration metric: took 6.2405ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.399704    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.551329    6684 request.go:629] Waited for 151.4288ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:26:54.551470    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:26:54.551470    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.551470    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.551470    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.555180    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:54.555180    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.555622    6684 round_trippers.go:580]     Audit-Id: fd4cbca5-fada-4094-b692-71c92a4c2b2d
	I0624 05:26:54.555622    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.555622    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.555622    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.555622    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.555622    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.556098    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"385","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0624 05:26:54.753032    6684 request.go:629] Waited for 196.3655ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.753284    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:26:54.753284    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.753363    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.753363    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.756570    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:54.756570    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.757272    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.757272    6684 round_trippers.go:580]     Audit-Id: 7626f7a6-7c5c-45e6-b21f-0e400b064573
	I0624 05:26:54.757272    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.757272    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.757272    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.757272    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.757644    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0624 05:26:54.758276    6684 pod_ready.go:92] pod "kube-scheduler-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:26:54.758416    6684 pod_ready.go:81] duration metric: took 358.7109ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:26:54.758416    6684 pod_ready.go:38] duration metric: took 2.4368331s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:26:54.758478    6684 api_server.go:52] waiting for apiserver process to appear ...
	I0624 05:26:54.771435    6684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:26:54.799224    6684 command_runner.go:130] > 1946
	I0624 05:26:54.799573    6684 api_server.go:72] duration metric: took 13.8267422s to wait for apiserver process to appear ...
	I0624 05:26:54.799573    6684 api_server.go:88] waiting for apiserver healthz status ...
	I0624 05:26:54.799704    6684 api_server.go:253] Checking apiserver healthz at https://172.31.211.219:8443/healthz ...
	I0624 05:26:54.807324    6684 api_server.go:279] https://172.31.211.219:8443/healthz returned 200:
	ok
	I0624 05:26:54.807639    6684 round_trippers.go:463] GET https://172.31.211.219:8443/version
	I0624 05:26:54.807639    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.807639    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.807639    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.809501    6684 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:26:54.809501    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.809501    6684 round_trippers.go:580]     Audit-Id: bc034c15-2c2d-4f0b-9d2d-bc2bce90cd5b
	I0624 05:26:54.809501    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.809501    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.809501    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.809501    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.809501    6684 round_trippers.go:580]     Content-Length: 263
	I0624 05:26:54.810008    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.810008    6684 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0624 05:26:54.810070    6684 api_server.go:141] control plane version: v1.30.2
	I0624 05:26:54.810353    6684 api_server.go:131] duration metric: took 10.7793ms to wait for apiserver health ...
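The /version response logged above is plain JSON, and the control-plane version reported at api_server.go:141 is its gitVersion field. A small decoding sketch, assuming the body has already been fetched (the struct models only the fields used here):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors the /version fields that matter for the check.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.2","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.2
    }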
	I0624 05:26:54.810353    6684 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 05:26:54.956216    6684 request.go:629] Waited for 145.4902ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:26:54.956216    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:26:54.956216    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:54.956216    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:54.956216    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:54.960809    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:26:54.960809    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:54.960809    6684 round_trippers.go:580]     Audit-Id: 4359e219-2237-43a4-9ce8-4ed5bdfa5fa3
	I0624 05:26:54.960809    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:54.960809    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:54.960809    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:54.961769    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:54.961769    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:54 GMT
	I0624 05:26:54.963010    6684 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"420","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0624 05:26:54.966174    6684 system_pods.go:59] 8 kube-system pods found
	I0624 05:26:54.966258    6684 system_pods.go:61] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "etcd-multinode-876600" [4906666c-eed2-4f7c-a011-5a9b589fdcdc] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "kube-apiserver-multinode-876600" [52a7f191-9dd7-4dcd-8e9e-d05deeac2349] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:26:54.966258    6684 system_pods.go:61] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:26:54.966258    6684 system_pods.go:74] duration metric: took 155.8506ms to wait for pod list to return data ...
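The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter, which delays a request locally before it ever reaches the apiserver's priority-and-fairness layer. A generic illustration of the same idea with golang.org/x/time/rate (the 5 QPS / burst 10 numbers are illustrative, not minikube's configuration):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Allow roughly 5 requests per second with a burst of 10, similar in
        // spirit to a client-side QPS/Burst throttle.
        limiter := rate.NewLimiter(rate.Limit(5), 10)

        for i := 0; i < 3; i++ {
            start := time.Now()
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            fmt.Printf("request %d released after %v\n", i, time.Since(start))
            // the GET /api/v1/namespaces/kube-system/pods call would go here
        }
    }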
	I0624 05:26:54.966411    6684 default_sa.go:34] waiting for default service account to be created ...
	I0624 05:26:55.160200    6684 request.go:629] Waited for 193.39ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/default/serviceaccounts
	I0624 05:26:55.160511    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/default/serviceaccounts
	I0624 05:26:55.160599    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:55.160599    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:55.160599    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:55.164522    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:55.164586    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:55.164586    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:55.164586    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:55.164586    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:55.164586    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:55.164586    6684 round_trippers.go:580]     Content-Length: 261
	I0624 05:26:55.164586    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:55 GMT
	I0624 05:26:55.164586    6684 round_trippers.go:580]     Audit-Id: 1cc97b35-d7d1-41ee-9f19-3f5ad53605f4
	I0624 05:26:55.164653    6684 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b646e33d-a735-486e-bc23-8dd57a7f6b3f","resourceVersion":"332","creationTimestamp":"2024-06-24T12:26:40Z"}}]}
	I0624 05:26:55.164998    6684 default_sa.go:45] found service account: "default"
	I0624 05:26:55.165116    6684 default_sa.go:55] duration metric: took 198.7043ms for default service account to be created ...
	I0624 05:26:55.165116    6684 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 05:26:55.349566    6684 request.go:629] Waited for 184.3285ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:26:55.349781    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:26:55.349781    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:55.349781    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:55.349836    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:55.353446    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:26:55.354445    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:55.354468    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:55.354468    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:55.354468    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:55.354468    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:55 GMT
	I0624 05:26:55.354468    6684 round_trippers.go:580]     Audit-Id: cd6b8a8e-773b-4239-8b90-1eacec4b1e26
	I0624 05:26:55.354468    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:55.356814    6684 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"420","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0624 05:26:55.359787    6684 system_pods.go:86] 8 kube-system pods found
	I0624 05:26:55.359884    6684 system_pods.go:89] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:26:55.359884    6684 system_pods.go:89] "etcd-multinode-876600" [4906666c-eed2-4f7c-a011-5a9b589fdcdc] Running
	I0624 05:26:55.359884    6684 system_pods.go:89] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:26:55.359954    6684 system_pods.go:89] "kube-apiserver-multinode-876600" [52a7f191-9dd7-4dcd-8e9e-d05deeac2349] Running
	I0624 05:26:55.359954    6684 system_pods.go:89] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:26:55.359954    6684 system_pods.go:89] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:26:55.359954    6684 system_pods.go:89] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:26:55.359954    6684 system_pods.go:89] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:26:55.360018    6684 system_pods.go:126] duration metric: took 194.9011ms to wait for k8s-apps to be running ...
	I0624 05:26:55.360018    6684 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 05:26:55.373745    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:26:55.400563    6684 system_svc.go:56] duration metric: took 40.545ms WaitForService to wait for kubelet
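The kubelet check above leans on the exit status of `systemctl is-active --quiet`: exit 0 means the unit is active, anything else means inactive or failed. A local sketch of the same check with os/exec (in the log the command actually runs remotely through minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // serviceActive returns true when `systemctl is-active --quiet <name>`
    // exits 0, i.e. the unit is currently active.
    func serviceActive(name string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", name).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", serviceActive("kubelet"))
    }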
	I0624 05:26:55.400563    6684 kubeadm.go:576] duration metric: took 14.4277298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:26:55.400717    6684 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:26:55.555649    6684 request.go:629] Waited for 154.6328ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes
	I0624 05:26:55.555649    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes
	I0624 05:26:55.555909    6684 round_trippers.go:469] Request Headers:
	I0624 05:26:55.555909    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:26:55.555909    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:26:55.559483    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:26:55.559558    6684 round_trippers.go:577] Response Headers:
	I0624 05:26:55.559558    6684 round_trippers.go:580]     Audit-Id: 94ee7657-222a-4a28-ac00-1da44553912e
	I0624 05:26:55.559624    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:26:55.559624    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:26:55.559624    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:26:55.559624    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:26:55.559624    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:26:55 GMT
	I0624 05:26:55.559624    6684 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"404","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0624 05:26:55.560729    6684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:26:55.560729    6684 node_conditions.go:123] node cpu capacity is 2
	I0624 05:26:55.560857    6684 node_conditions.go:105] duration metric: took 160.1389ms to run NodePressure ...
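The NodePressure step reads the node's reported capacity (ephemeral storage and CPU) from the NodeList payload fetched just above. A minimal sketch of pulling those two values out of the JSON using plain encoding/json rather than the Kubernetes client types; the sample body reuses the numbers from this log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeList models only the fields needed for the capacity check.
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        body := []byte(`{"items":[{"metadata":{"name":"multinode-876600"},
            "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}]}`)
        var nl nodeList
        if err := json.Unmarshal(body, &nl); err != nil {
            panic(err)
        }
        for _, n := range nl.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
        }
    }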
	I0624 05:26:55.560857    6684 start.go:240] waiting for startup goroutines ...
	I0624 05:26:55.560857    6684 start.go:245] waiting for cluster config update ...
	I0624 05:26:55.560857    6684 start.go:254] writing updated cluster config ...
	I0624 05:26:55.564952    6684 out.go:177] 
	I0624 05:26:55.568111    6684 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:26:55.576299    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:26:55.576299    6684 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:26:55.582821    6684 out.go:177] * Starting "multinode-876600-m02" worker node in "multinode-876600" cluster
	I0624 05:26:55.587048    6684 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:26:55.587048    6684 cache.go:56] Caching tarball of preloaded images
	I0624 05:26:55.587048    6684 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:26:55.587824    6684 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:26:55.588855    6684 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:26:55.592022    6684 start.go:360] acquireMachinesLock for multinode-876600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:26:55.592022    6684 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-876600-m02"
	I0624 05:26:55.592716    6684 start.go:93] Provisioning new machine with config: &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0624 05:26:55.592716    6684 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0624 05:26:55.596315    6684 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0624 05:26:55.596315    6684 start.go:159] libmachine.API.Create for "multinode-876600" (driver="hyperv")
	I0624 05:26:55.596315    6684 client.go:168] LocalClient.Create starting
	I0624 05:26:55.597055    6684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0624 05:26:55.597699    6684 main.go:141] libmachine: Decoding PEM data...
	I0624 05:26:55.597699    6684 main.go:141] libmachine: Parsing certificate...
	I0624 05:26:55.597699    6684 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0624 05:26:55.597699    6684 main.go:141] libmachine: Decoding PEM data...
	I0624 05:26:55.597699    6684 main.go:141] libmachine: Parsing certificate...
	I0624 05:26:55.597699    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0624 05:26:57.485136    6684 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0624 05:26:57.485210    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:57.485260    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0624 05:26:59.209398    6684 main.go:141] libmachine: [stdout =====>] : False
	
	I0624 05:26:59.210461    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:26:59.210461    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 05:27:00.720473    6684 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 05:27:00.720473    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:00.720583    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 05:27:04.468803    6684 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 05:27:04.469091    6684 main.go:141] libmachine: [stderr =====>] : 
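Each `[executing ==>]` / `[stdout =====>]` / `[stderr =====>]` triple in this log is one powershell.exe invocation with -NoProfile -NonInteractive, with stdout and stderr captured separately. A hedged sketch of that calling pattern (command text taken from the log, error handling trimmed; this is not minikube's exact wrapper):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // runPS runs a PowerShell snippet the way the log shows: no profile,
    // non-interactive, stdout and stderr captured separately.
    func runPS(script string) (string, string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script)
        var out, errb bytes.Buffer
        cmd.Stdout, cmd.Stderr = &out, &errb
        err := cmd.Run()
        return out.String(), errb.String(), err
    }

    func main() {
        stdout, stderr, err := runPS(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\nerr: %v\n", stdout, stderr, err)
    }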
	I0624 05:27:04.471236    6684 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0624 05:27:04.971275    6684 main.go:141] libmachine: Creating SSH key...
	I0624 05:27:05.312732    6684 main.go:141] libmachine: Creating VM...
	I0624 05:27:05.312732    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0624 05:27:08.308146    6684 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0624 05:27:08.308146    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:08.308146    6684 main.go:141] libmachine: Using switch "Default Switch"
	I0624 05:27:08.308146    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0624 05:27:10.113206    6684 main.go:141] libmachine: [stdout =====>] : True
	
	I0624 05:27:10.114215    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:10.114215    6684 main.go:141] libmachine: Creating VHD
	I0624 05:27:10.114314    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0624 05:27:13.930814    6684 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 88E845AF-608F-4304-903C-EE9D29C3B382
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0624 05:27:13.930814    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:13.930942    6684 main.go:141] libmachine: Writing magic tar header
	I0624 05:27:13.930942    6684 main.go:141] libmachine: Writing SSH key tar header
	I0624 05:27:13.939514    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0624 05:27:17.148582    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:17.149094    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:17.149157    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\disk.vhd' -SizeBytes 20000MB
	I0624 05:27:19.787771    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:19.787771    6684 main.go:141] libmachine: [stderr =====>] : 
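The disk bootstrap above follows the usual Hyper-V machine-driver pattern: create a tiny fixed 10MB VHD, write a tar archive containing the freshly generated SSH key into it ("Writing magic tar header" / "Writing SSH key tar header") so the guest image can pick the key up on first boot, then Convert-VHD to a dynamic disk and Resize-VHD to the requested 20000MB. A rough sketch of the tar-writing half only; the in-archive path `.ssh/authorized_keys` and writing to a plain .tar file are assumptions for illustration, since the log does not show the exact layout:

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a tar archive containing the SSH public key to path.
    // The real driver writes such an archive into the raw fixed VHD so the
    // guest can find the key on first boot; this sketch writes a .tar file.
    func writeKeyTar(path string, pubKey []byte) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f)
        hdr := &tar.Header{
            Name: ".ssh/authorized_keys", // assumed layout, not taken from the log
            Mode: 0644,
            Size: int64(len(pubKey)),
        }
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }

    func main() {
        _ = writeKeyTar("disk-seed.tar", []byte("ssh-rsa AAAA... jenkins@minikube1\n"))
    }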
	I0624 05:27:19.788183    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-876600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0624 05:27:23.514728    6684 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-876600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0624 05:27:23.515584    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:23.515584    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-876600-m02 -DynamicMemoryEnabled $false
	I0624 05:27:25.844390    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:25.844566    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:25.844566    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-876600-m02 -Count 2
	I0624 05:27:28.087310    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:28.088058    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:28.088058    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-876600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\boot2docker.iso'
	I0624 05:27:30.729367    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:30.729367    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:30.729367    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-876600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\disk.vhd'
	I0624 05:27:33.433769    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:33.434084    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:33.434084    6684 main.go:141] libmachine: Starting VM...
	I0624 05:27:33.434084    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600-m02
	I0624 05:27:36.538159    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:36.538726    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:36.538726    6684 main.go:141] libmachine: Waiting for host to start...
	I0624 05:27:36.538726    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:27:38.891136    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:27:38.891190    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:38.891190    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:27:41.500918    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:41.500918    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:42.508388    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:27:44.771305    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:27:44.772357    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:44.772357    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:27:47.456650    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:47.456650    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:48.461750    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:27:50.789251    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:27:50.789967    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:50.790035    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:27:53.391891    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:53.391891    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:54.403267    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:27:56.698140    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:27:56.698140    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:27:56.698782    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:27:59.290519    6684 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:27:59.290688    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:00.293063    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:02.612251    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:02.612349    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:02.612409    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:05.391241    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:05.391241    6684 main.go:141] libmachine: [stderr =====>] : 
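"Waiting for host to start..." is a poll loop: query the VM state, then ask for the first IP address of the first network adapter, and sleep and retry while that comes back empty; here it stays empty for roughly 30 seconds before 172.31.221.199 appears. A simplified sketch of that loop, where getVMIP is a hypothetical placeholder standing in for the two PowerShell queries seen in the log:

    package main

    import (
        "fmt"
        "time"
    )

    // getVMIP would wrap the two PowerShell calls from the log:
    //   ( Hyper-V\Get-VM <name> ).state
    //   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
    // It is a hypothetical helper for this sketch only.
    func getVMIP(name string) (string, error) {
        return "", nil // pretend the adapter has not reported an address yet
    }

    // waitForIP retries until the VM reports a non-empty address or the
    // timeout elapses, mirroring the retry cadence visible in the log.
    func waitForIP(name string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := getVMIP(name)
            if err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(1 * time.Second)
        }
        return "", fmt.Errorf("%s did not report an IP within %s", name, timeout)
    }

    func main() {
        ip, err := waitForIP("multinode-876600-m02", 2*time.Minute)
        fmt.Println(ip, err)
    }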
	I0624 05:28:05.391443    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:07.625401    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:07.625463    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:07.625463    6684 machine.go:94] provisionDockerMachine start ...
	I0624 05:28:07.625463    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:09.886479    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:09.886479    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:09.887322    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:12.522054    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:12.522110    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:12.531913    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:12.541826    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:12.541826    6684 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:28:12.669444    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
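Once the address is known, provisioning runs over SSH as user "docker" on port 22 with the per-machine id_rsa key (see the sshutil line further down), and the first command is a bare `hostname`, which still answers "minikube" before the rename. A minimal sketch of that first round trip with golang.org/x/crypto/ssh; host-key checking is disabled here purely to keep the sketch short:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "172.31.221.199:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.Output("hostname")
        fmt.Printf("SSH cmd err, output: %v: %s", err, out) // "minikube" before the rename
    }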
	
	I0624 05:28:12.669444    6684 buildroot.go:166] provisioning hostname "multinode-876600-m02"
	I0624 05:28:12.669611    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:14.912946    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:14.912946    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:14.912946    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:17.591974    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:17.591974    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:17.597991    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:17.598829    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:17.598829    6684 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600-m02 && echo "multinode-876600-m02" | sudo tee /etc/hostname
	I0624 05:28:17.754839    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600-m02
	
	I0624 05:28:17.755425    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:19.957402    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:19.958163    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:19.958274    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:22.586189    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:22.587185    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:22.593456    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:22.593456    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:22.594030    6684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:28:22.729179    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 05:28:22.729179    6684 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:28:22.729307    6684 buildroot.go:174] setting up certificates
	I0624 05:28:22.729307    6684 provision.go:84] configureAuth start
	I0624 05:28:22.729415    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:24.970055    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:24.970055    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:24.970055    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:27.563653    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:27.563717    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:27.563717    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:29.791158    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:29.791158    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:29.791250    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:32.475192    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:32.475192    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:32.475192    6684 provision.go:143] copyHostCerts
	I0624 05:28:32.476055    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:28:32.476366    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:28:32.476366    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:28:32.476865    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:28:32.477882    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:28:32.477882    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:28:32.477882    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:28:32.478728    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:28:32.479699    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:28:32.479958    6684 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:28:32.479958    6684 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:28:32.480448    6684 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:28:32.481251    6684 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600-m02 san=[127.0.0.1 172.31.221.199 localhost minikube multinode-876600-m02]
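configureAuth ends by issuing a server certificate signed by the minikube CA with the SANs listed above (127.0.0.1, the machine IP, localhost, minikube, and the machine name). A compact sketch of that kind of issuance with crypto/x509; it is not minikube's code path, the throwaway CA exists only so the sketch runs on its own, and mapping the 26280h CertExpiration from the config dump onto NotAfter is an assumption:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the listed SANs with the CA.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-876600-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above (assumed mapping)
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-876600-m02"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.31.221.199")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

    func main() {
        // Throwaway self-signed CA for the sketch; minikube loads ca.pem / ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        if err := issueServerCert(caCert, caKey); err != nil {
            panic(err)
        }
    }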
	I0624 05:28:32.746645    6684 provision.go:177] copyRemoteCerts
	I0624 05:28:32.759182    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:28:32.759182    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:35.027858    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:35.027858    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:35.028361    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:37.703024    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:37.703084    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:37.703084    6684 sshutil.go:53] new ssh client: &{IP:172.31.221.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:28:37.804088    6684 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.044887s)
	I0624 05:28:37.804088    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:28:37.804088    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:28:37.858513    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:28:37.858706    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0624 05:28:37.912512    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:28:37.913163    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 05:28:37.960022    6684 provision.go:87] duration metric: took 15.2306558s to configureAuth
	I0624 05:28:37.960022    6684 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:28:37.960022    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:28:37.960022    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:40.199440    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:40.199440    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:40.200480    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:42.836196    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:42.837035    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:42.842345    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:42.843108    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:42.843108    6684 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:28:42.965604    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:28:42.965707    6684 buildroot.go:70] root file system type: tmpfs
	I0624 05:28:42.965869    6684 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:28:42.965869    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:45.170761    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:45.170761    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:45.171052    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:47.838202    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:47.838443    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:47.843640    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:47.844527    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:47.844527    6684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.211.219"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:28:48.001185    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.211.219
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:28:48.001185    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:50.227716    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:50.227888    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:50.227979    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:52.890094    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:52.890168    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:52.896216    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:28:52.896426    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:28:52.896426    6684 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:28:55.107721    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:28:55.107800    6684 machine.go:97] duration metric: took 47.4821535s to provisionDockerMachine
	I0624 05:28:55.107800    6684 client.go:171] duration metric: took 1m59.5110215s to LocalClient.Create
	I0624 05:28:55.107800    6684 start.go:167] duration metric: took 1m59.5110215s to libmachine.API.Create "multinode-876600"
	I0624 05:28:55.107887    6684 start.go:293] postStartSetup for "multinode-876600-m02" (driver="hyperv")
	I0624 05:28:55.107887    6684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:28:55.121425    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:28:55.121425    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:28:57.297186    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:28:57.297942    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:57.297942    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:28:59.936376    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:28:59.936884    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:28:59.936953    6684 sshutil.go:53] new ssh client: &{IP:172.31.221.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:29:00.054270    6684 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9327123s)
	I0624 05:29:00.071403    6684 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:29:00.078918    6684 command_runner.go:130] > NAME=Buildroot
	I0624 05:29:00.079253    6684 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:29:00.079253    6684 command_runner.go:130] > ID=buildroot
	I0624 05:29:00.079253    6684 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:29:00.079253    6684 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:29:00.079387    6684 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:29:00.079443    6684 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:29:00.079856    6684 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:29:00.080860    6684 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:29:00.080860    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:29:00.093424    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:29:00.112530    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:29:00.156589    6684 start.go:296] duration metric: took 5.0486827s for postStartSetup
	I0624 05:29:00.160460    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:02.357077    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:02.357352    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:02.357352    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:04.990199    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:04.990400    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:04.990596    6684 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:29:04.993200    6684 start.go:128] duration metric: took 2m9.3999361s to createHost
	I0624 05:29:04.993246    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:07.201619    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:07.201990    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:07.202092    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:09.831773    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:09.831832    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:09.838387    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:29:09.838387    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:29:09.839007    6684 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 05:29:09.964756    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719232149.958476232
	
	I0624 05:29:09.964849    6684 fix.go:216] guest clock: 1719232149.958476232
	I0624 05:29:09.964849    6684 fix.go:229] Guest: 2024-06-24 05:29:09.958476232 -0700 PDT Remote: 2024-06-24 05:29:04.993246 -0700 PDT m=+345.890515101 (delta=4.965230232s)
	I0624 05:29:09.964944    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:12.166562    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:12.166562    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:12.167022    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:14.827300    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:14.827300    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:14.834354    6684 main.go:141] libmachine: Using SSH client type: native
	I0624 05:29:14.834945    6684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.221.199 22 <nil> <nil>}
	I0624 05:29:14.835165    6684 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719232149
	I0624 05:29:14.974867    6684 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:29:09 UTC 2024
	
	I0624 05:29:14.974931    6684 fix.go:236] clock set: Mon Jun 24 12:29:09 UTC 2024
	 (err=<nil>)
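
The lines above are minikube's guest-clock fix: it reads the guest's epoch time over SSH (`date +%s.%N`), compares it to the host clock (delta=4.965230232s in this run), and resets the guest with `sudo date -s @<epoch>`. Below is a minimal Go sketch of that flow, not minikube's own code; it assumes the system ssh client is on PATH, uses the worker address from this log, and the 2s drift threshold is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestEpoch runs `date +%s.%N` on the guest and parses the result.
func guestEpoch(target string) (time.Time, error) {
	out, err := exec.Command("ssh", target, "date +%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	return time.Unix(sec, int64((secs-float64(sec))*1e9)), nil
}

func main() {
	target := "docker@172.31.221.199" // worker address from the log above
	guest, err := guestEpoch(target)
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest clock drift: %v\n", delta)
	if delta > 2*time.Second || delta < -2*time.Second { // threshold is an assumption
		set := fmt.Sprintf("sudo date -s @%d", time.Now().Unix())
		if err := exec.Command("ssh", target, set).Run(); err != nil {
			panic(err)
		}
	}
}
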
	I0624 05:29:14.974931    6684 start.go:83] releasing machines lock for "multinode-876600-m02", held for 2m19.3823702s
	I0624 05:29:14.975266    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:17.213464    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:17.213464    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:17.213464    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:19.845488    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:19.845546    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:19.848103    6684 out.go:177] * Found network options:
	I0624 05:29:19.851186    6684 out.go:177]   - NO_PROXY=172.31.211.219
	W0624 05:29:19.853574    6684 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:29:19.855587    6684 out.go:177]   - NO_PROXY=172.31.211.219
	W0624 05:29:19.857912    6684 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 05:29:19.859814    6684 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:29:19.862128    6684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:29:19.862128    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:19.873112    6684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:29:19.873112    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:29:22.133100    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:22.133328    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:22.133328    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:22.133933    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:22.133933    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:22.133933    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:24.851020    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:24.851269    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:24.851476    6684 sshutil.go:53] new ssh client: &{IP:172.31.221.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:29:24.875392    6684 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:29:24.875392    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:24.876709    6684 sshutil.go:53] new ssh client: &{IP:172.31.221.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:29:24.937802    6684 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0624 05:29:24.938758    6684 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0656268s)
	W0624 05:29:24.938879    6684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:29:24.950890    6684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:29:25.060651    6684 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:29:25.061895    6684 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:29:25.061895    6684 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 05:29:25.061895    6684 start.go:494] detecting cgroup driver to use...
	I0624 05:29:25.062010    6684 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1998614s)
	I0624 05:29:25.062140    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:29:25.100635    6684 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:29:25.113955    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:29:25.147726    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:29:25.169332    6684 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:29:25.181984    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:29:25.211951    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:29:25.241723    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:29:25.273982    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:29:25.306046    6684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:29:25.337829    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:29:25.371013    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:29:25.402970    6684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 05:29:25.434469    6684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:29:25.452177    6684 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:29:25.464608    6684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:29:25.494363    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:25.711442    6684 ssh_runner.go:195] Run: sudo systemctl restart containerd
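
The sed commands above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), replace deprecated runtime names with io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before restarting the service. A minimal Go sketch of the central edit, assuming local file access (an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // path from the log above
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("SystemdCgroup forced to false; restart containerd to apply")
}
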
	I0624 05:29:25.743689    6684 start.go:494] detecting cgroup driver to use...
	I0624 05:29:25.757098    6684 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:29:25.781881    6684 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:29:25.781881    6684 command_runner.go:130] > [Unit]
	I0624 05:29:25.781881    6684 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:29:25.781881    6684 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:29:25.781881    6684 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:29:25.781881    6684 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:29:25.781881    6684 command_runner.go:130] > StartLimitBurst=3
	I0624 05:29:25.781881    6684 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:29:25.781881    6684 command_runner.go:130] > [Service]
	I0624 05:29:25.781881    6684 command_runner.go:130] > Type=notify
	I0624 05:29:25.781881    6684 command_runner.go:130] > Restart=on-failure
	I0624 05:29:25.781881    6684 command_runner.go:130] > Environment=NO_PROXY=172.31.211.219
	I0624 05:29:25.781881    6684 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:29:25.781881    6684 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:29:25.781881    6684 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:29:25.781881    6684 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:29:25.781881    6684 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:29:25.781881    6684 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:29:25.781881    6684 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:29:25.781881    6684 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:29:25.781881    6684 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:29:25.781881    6684 command_runner.go:130] > ExecStart=
	I0624 05:29:25.781881    6684 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:29:25.781881    6684 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:29:25.781881    6684 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:29:25.781881    6684 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:29:25.781881    6684 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:29:25.781881    6684 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:29:25.781881    6684 command_runner.go:130] > LimitCORE=infinity
	I0624 05:29:25.781881    6684 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:29:25.781881    6684 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:29:25.781881    6684 command_runner.go:130] > TasksMax=infinity
	I0624 05:29:25.781881    6684 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:29:25.781881    6684 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:29:25.782409    6684 command_runner.go:130] > Delegate=yes
	I0624 05:29:25.782490    6684 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:29:25.782490    6684 command_runner.go:130] > KillMode=process
	I0624 05:29:25.782490    6684 command_runner.go:130] > [Install]
	I0624 05:29:25.782490    6684 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:29:25.797612    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:29:25.831543    6684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:29:25.900079    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:29:25.939079    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:29:25.975180    6684 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:29:26.044786    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:29:26.075943    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:29:26.114361    6684 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:29:26.127305    6684 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:29:26.134317    6684 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:29:26.144852    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:29:26.168424    6684 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:29:26.219098    6684 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:29:26.423808    6684 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:29:26.613548    6684 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:29:26.613632    6684 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:29:26.657189    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:26.865725    6684 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:29:29.416303    6684 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5505336s)
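
Here the same cgroup-driver choice is applied to dockerd: a small /etc/docker/daemon.json (130 bytes in this run) is copied in and docker is restarted. The log does not print the payload, so the sketch below writes an assumed-equivalent file pinning the cgroupfs driver; the exact content minikube generates may differ.

package main

import "os"

func main() {
	// Assumed daemon.json content; minikube's generated file may differ in detail.
	daemonJSON := `{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"}
}
`
	if err := os.WriteFile("/etc/docker/daemon.json", []byte(daemonJSON), 0o644); err != nil {
		panic(err)
	}
	// Then, as in the log: systemctl daemon-reload && systemctl restart docker.
}
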
	I0624 05:29:29.428973    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:29:29.466400    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:29:29.503682    6684 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:29:29.704531    6684 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:29:29.908745    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:30.123782    6684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:29:30.168921    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:29:30.209189    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:30.418477    6684 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:29:30.531630    6684 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:29:30.545542    6684 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:29:30.554877    6684 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:29:30.554977    6684 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:29:30.554977    6684 command_runner.go:130] > Device: 0,22	Inode: 892         Links: 1
	I0624 05:29:30.554977    6684 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:29:30.554977    6684 command_runner.go:130] > Access: 2024-06-24 12:29:30.445137477 +0000
	I0624 05:29:30.554977    6684 command_runner.go:130] > Modify: 2024-06-24 12:29:30.445137477 +0000
	I0624 05:29:30.555042    6684 command_runner.go:130] > Change: 2024-06-24 12:29:30.449137368 +0000
	I0624 05:29:30.555042    6684 command_runner.go:130] >  Birth: -
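
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple existence poll, and the stat output confirms the socket appeared. A minimal sketch of that wait; the 500ms polling interval is an assumption, the path and timeout come from the log.

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the socket exists
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is ready")
}
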
	I0624 05:29:30.555177    6684 start.go:562] Will wait 60s for crictl version
	I0624 05:29:30.567658    6684 ssh_runner.go:195] Run: which crictl
	I0624 05:29:30.573586    6684 command_runner.go:130] > /usr/bin/crictl
	I0624 05:29:30.583583    6684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:29:30.644694    6684 command_runner.go:130] > Version:  0.1.0
	I0624 05:29:30.644694    6684 command_runner.go:130] > RuntimeName:  docker
	I0624 05:29:30.644694    6684 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:29:30.644694    6684 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:29:30.644694    6684 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 05:29:30.654289    6684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:29:30.690280    6684 command_runner.go:130] > 26.1.4
	I0624 05:29:30.700795    6684 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:29:30.732722    6684 command_runner.go:130] > 26.1.4
	I0624 05:29:30.736653    6684 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:29:30.738865    6684 out.go:177]   - env NO_PROXY=172.31.211.219
	I0624 05:29:30.740561    6684 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:29:30.745504    6684 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:29:30.745504    6684 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:29:30.745504    6684 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:29:30.745504    6684 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:29:30.749067    6684 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:29:30.749067    6684 ip.go:210] interface addr: 172.31.208.1/20
	I0624 05:29:30.763085    6684 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:29:30.770631    6684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:29:30.791051    6684 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:29:30.792610    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:29:30.793460    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:29:32.971177    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:32.971432    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:32.971432    6684 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:29:32.972284    6684 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.221.199
	I0624 05:29:32.972370    6684 certs.go:194] generating shared ca certs ...
	I0624 05:29:32.972370    6684 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:29:32.973011    6684 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:29:32.973364    6684 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:29:32.973364    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:29:32.973364    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:29:32.973911    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:29:32.974020    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:29:32.974681    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:29:32.974961    6684 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:29:32.975128    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:29:32.975430    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:29:32.975752    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:29:32.976037    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:29:32.976421    6684 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:29:32.976421    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:29:32.976421    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:29:32.977130    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:29:32.977413    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:29:33.024646    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:29:33.077450    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:29:33.125982    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:29:33.176580    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:29:33.222215    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:29:33.267609    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:29:33.323751    6684 ssh_runner.go:195] Run: openssl version
	I0624 05:29:33.333581    6684 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:29:33.345913    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:29:33.379263    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:29:33.386044    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:29:33.386104    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:29:33.399038    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:29:33.406894    6684 command_runner.go:130] > 3ec20f2e
	I0624 05:29:33.418633    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 05:29:33.450029    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:29:33.482440    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:29:33.490285    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:29:33.490285    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:29:33.501748    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:29:33.508836    6684 command_runner.go:130] > b5213941
	I0624 05:29:33.521905    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 05:29:33.553601    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:29:33.587267    6684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:29:33.594460    6684 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:29:33.595344    6684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:29:33.607602    6684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:29:33.619778    6684 command_runner.go:130] > 51391683
	I0624 05:29:33.632575    6684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
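
The openssl/ln sequence above installs each CA certificate into the system trust store: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA in this run) and a `<hash>.0` symlink is created in /etc/ssl/certs so OpenSSL can locate the cert. A minimal Go sketch of one iteration, shelling out to openssl for the hash (illustration only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
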
	I0624 05:29:33.667328    6684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:29:33.673283    6684 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:29:33.673283    6684 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:29:33.673283    6684 kubeadm.go:928] updating node {m02 172.31.221.199 8443 v1.30.2 docker false true} ...
	I0624 05:29:33.673283    6684 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.221.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 05:29:33.686955    6684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:29:33.706571    6684 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	I0624 05:29:33.706734    6684 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0624 05:29:33.719465    6684 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0624 05:29:33.737944    6684 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0624 05:29:33.737944    6684 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0624 05:29:33.737944    6684 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0624 05:29:33.737944    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 05:29:33.738470    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 05:29:33.753707    6684 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0624 05:29:33.754749    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:29:33.759230    6684 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0624 05:29:33.762657    6684 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 05:29:33.762657    6684 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0624 05:29:33.762657    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0624 05:29:33.791807    6684 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 05:29:33.791874    6684 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 05:29:33.791874    6684 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0624 05:29:33.791874    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0624 05:29:33.804657    6684 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0624 05:29:33.868635    6684 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 05:29:33.869192    6684 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0624 05:29:33.869415    6684 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
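
Because /var/lib/minikube/binaries/v1.30.2 does not exist on the new node, the kubeadm/kubelet/kubectl binaries are transferred from the host cache; the log also shows the dl.k8s.io URLs with their .sha256 checksum companions that back that cache. A minimal sketch of fetching one binary and verifying its published checksum (illustration only, using the kubeadm URL from the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory; real code would stream to disk.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for kubeadm")
	}
	fmt.Println("kubeadm verified:", want)
}
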
	I0624 05:29:35.113336    6684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0624 05:29:35.134105    6684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0624 05:29:35.166485    6684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:29:35.212532    6684 ssh_runner.go:195] Run: grep 172.31.211.219	control-plane.minikube.internal$ /etc/hosts
	I0624 05:29:35.217708    6684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.211.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:29:35.257967    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:35.476326    6684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:29:35.508615    6684 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:29:35.510301    6684 start.go:316] joinCluster: &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:29:35.510533    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0624 05:29:35.510533    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:29:37.740161    6684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:29:37.740201    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:37.740201    6684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:29:40.377226    6684 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:29:40.378299    6684 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:29:40.378508    6684 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:29:40.582249    6684 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9tgu8u.t5vbzqziet6nm6kh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 
	I0624 05:29:40.582961    6684 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0724084s)
	I0624 05:29:40.583073    6684 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0624 05:29:40.583073    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9tgu8u.t5vbzqziet6nm6kh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-876600-m02"
	I0624 05:29:40.654210    6684 command_runner.go:130] > [preflight] Running pre-flight checks
	I0624 05:29:40.844888    6684 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0624 05:29:40.844888    6684 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0624 05:29:40.919902    6684 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 05:29:40.919902    6684 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 05:29:40.919902    6684 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0624 05:29:41.134330    6684 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0624 05:29:42.137986    6684 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.004211778s
	I0624 05:29:42.138103    6684 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0624 05:29:42.168425    6684 command_runner.go:130] > This node has joined the cluster:
	I0624 05:29:42.168425    6684 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0624 05:29:42.169059    6684 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0624 05:29:42.169059    6684 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0624 05:29:42.172294    6684 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0624 05:29:42.172836    6684 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9tgu8u.t5vbzqziet6nm6kh --discovery-token-ca-cert-hash sha256:9a85d7d91d8815b9524d6d67b3de4772253bb6b4646bec475d981b311477f5e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-876600-m02": (1.5892156s)
	I0624 05:29:42.172836    6684 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0624 05:29:42.397073    6684 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0624 05:29:42.609721    6684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-876600-m02 minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec minikube.k8s.io/name=multinode-876600 minikube.k8s.io/primary=false
	I0624 05:29:42.746445    6684 command_runner.go:130] > node/multinode-876600-m02 labeled
	I0624 05:29:42.746592    6684 start.go:318] duration metric: took 7.2362635s to joinCluster
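
The join above is the standard two-step kubeadm flow: run `kubeadm token create --print-join-command --ttl=0` on the control plane, then execute the printed command on the worker with the cri-dockerd socket and node name appended, and finally enable kubelet. A minimal Go sketch of that orchestration over the system ssh client; addresses come from the log, the helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshOutput runs a command on the target via ssh and returns its combined output.
func sshOutput(target, cmd string) (string, error) {
	out, err := exec.Command("ssh", target, cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	controlPlane := "docker@172.31.211.219" // addresses from the log above
	worker := "docker@172.31.221.199"

	join, err := sshOutput(controlPlane,
		"sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(fmt.Errorf("token create failed: %v: %s", err, join))
	}
	// Append the worker-specific flags used in the log.
	join = "sudo " + join +
		" --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=multinode-876600-m02"
	if out, err := sshOutput(worker, join); err != nil {
		panic(fmt.Errorf("kubeadm join failed: %v: %s", err, out))
	}
	fmt.Println("worker joined the cluster")
}
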
	I0624 05:29:42.746735    6684 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0624 05:29:42.747489    6684 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:29:42.751770    6684 out.go:177] * Verifying Kubernetes components...
	I0624 05:29:42.765386    6684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:29:43.001049    6684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:29:43.029375    6684 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:29:43.029661    6684 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.211.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:29:43.030816    6684 node_ready.go:35] waiting up to 6m0s for node "multinode-876600-m02" to be "Ready" ...
	I0624 05:29:43.031099    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:43.031099    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:43.031145    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:43.031145    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:43.044353    6684 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0624 05:29:43.044353    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:43.044353    6684 round_trippers.go:580]     Audit-Id: 14c57f1e-3aea-4deb-8dbd-39efe2962ea6
	I0624 05:29:43.044353    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:43.044826    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:43.044826    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:43.044826    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:43.044826    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:43.044826    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:43 GMT
	I0624 05:29:43.044919    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:43.540485    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:43.540574    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:43.540574    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:43.540574    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:43.544233    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:43.544233    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:43.544649    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:43.544649    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:43.544649    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:43.544649    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:43.544707    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:43.544707    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:43 GMT
	I0624 05:29:43.544707    6684 round_trippers.go:580]     Audit-Id: 4b0853f9-82ad-4868-bdf2-dfb54e79dcaa
	I0624 05:29:43.544883    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:44.044490    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:44.044490    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:44.044490    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:44.044490    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:44.048662    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:44.048662    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:44.048662    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:44.048662    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:44 GMT
	I0624 05:29:44.048662    6684 round_trippers.go:580]     Audit-Id: 084ae0a0-f0c2-4d1a-9038-97bdbc8fb45f
	I0624 05:29:44.048662    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:44.048662    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:44.048662    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:44.048662    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:44.048662    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:44.545583    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:44.545583    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:44.545583    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:44.545583    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:44.550217    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:44.550945    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:44.550945    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:44 GMT
	I0624 05:29:44.550945    6684 round_trippers.go:580]     Audit-Id: 1e9ef34f-4187-4c0e-821a-6e22dad5b607
	I0624 05:29:44.550945    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:44.550945    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:44.550945    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:44.550945    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:44.550945    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:44.551222    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:45.045165    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:45.045369    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:45.045369    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:45.045369    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:45.049214    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:45.049748    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:45.049748    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:45.049748    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:45.049748    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:45.049823    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:45.049823    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:45.049823    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:45 GMT
	I0624 05:29:45.049823    6684 round_trippers.go:580]     Audit-Id: d57d093e-aa37-49ec-90ab-863ee80f4f75
	I0624 05:29:45.049998    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:45.050078    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:45.546154    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:45.546381    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:45.546381    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:45.546381    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:45.550671    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:45.550671    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:45.550671    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:45.550757    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:45.550757    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:45.550757    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:45.550757    6684 round_trippers.go:580]     Content-Length: 3921
	I0624 05:29:45.550757    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:45 GMT
	I0624 05:29:45.550757    6684 round_trippers.go:580]     Audit-Id: b9bab525-8c75-4db5-8fa7-455da482a594
	I0624 05:29:45.550850    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"586","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0624 05:29:46.034585    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:46.034814    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:46.034814    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:46.034814    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:46.047177    6684 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0624 05:29:46.047177    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:46.047177    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:46 GMT
	I0624 05:29:46.047177    6684 round_trippers.go:580]     Audit-Id: 77b195d5-5167-49d9-9391-c7d41d559d37
	I0624 05:29:46.047177    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:46.047926    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:46.047926    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:46.047926    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:46.047926    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:46.047982    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:46.534287    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:46.534374    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:46.534374    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:46.534374    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:46.537797    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:46.537797    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:46.537797    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:46.537797    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:46 GMT
	I0624 05:29:46.537797    6684 round_trippers.go:580]     Audit-Id: 2796bc96-28e9-40e1-9aa1-db8c45d2eb18
	I0624 05:29:46.537797    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:46.537797    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:46.537797    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:46.537797    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:46.538876    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:47.038485    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:47.038485    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:47.038485    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:47.038485    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:47.043739    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:29:47.043887    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:47.043887    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:47.043887    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:47 GMT
	I0624 05:29:47.043887    6684 round_trippers.go:580]     Audit-Id: ad99a81f-8d93-4a29-8fbb-caa7028279cb
	I0624 05:29:47.043887    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:47.043887    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:47.043887    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:47.043887    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:47.043887    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:47.544080    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:47.544080    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:47.544080    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:47.544080    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:47.548681    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:47.548681    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:47.548681    6684 round_trippers.go:580]     Audit-Id: 30ed2c0f-e826-480d-9b05-d3c58f6b9471
	I0624 05:29:47.548681    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:47.548681    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:47.548681    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:47.548681    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:47.549442    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:47.549442    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:47 GMT
	I0624 05:29:47.549576    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:47.549576    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:48.034453    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:48.034675    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:48.034675    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:48.034675    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:48.042228    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:29:48.042228    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:48.042228    6684 round_trippers.go:580]     Audit-Id: b1b669ce-319d-469a-aa70-d1538e9ac421
	I0624 05:29:48.042228    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:48.042228    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:48.042228    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:48.043094    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:48.043094    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:48.043094    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:48 GMT
	I0624 05:29:48.043094    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:48.536911    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:48.537019    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:48.537646    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:48.537646    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:48.542488    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:48.543250    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:48.543250    6684 round_trippers.go:580]     Audit-Id: 0289e524-b231-4432-b39e-947c7796a457
	I0624 05:29:48.543319    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:48.543424    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:48.543424    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:48.543512    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:48.543512    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:48.543512    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:48 GMT
	I0624 05:29:48.543727    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:49.045147    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:49.045147    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:49.045147    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:49.045147    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:49.049785    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:49.049785    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:49.050109    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:49.050109    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:49.050109    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:49.050109    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:49.050109    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:49 GMT
	I0624 05:29:49.050179    6684 round_trippers.go:580]     Audit-Id: 8d0d51c2-477b-464f-8f4a-151d3f430ac8
	I0624 05:29:49.050256    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:49.050400    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:49.541507    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:49.541798    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:49.541798    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:49.541879    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:49.548038    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:29:49.548122    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:49.548122    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:49 GMT
	I0624 05:29:49.548122    6684 round_trippers.go:580]     Audit-Id: 34d3e897-4073-4010-bebf-94aad21cd57a
	I0624 05:29:49.548122    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:49.548122    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:49.548122    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:49.548122    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:49.548122    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:49.548122    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:50.034815    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:50.034880    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:50.034880    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:50.034880    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:50.039448    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:50.039448    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:50.039448    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:50.039448    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:50 GMT
	I0624 05:29:50.039533    6684 round_trippers.go:580]     Audit-Id: a8d019db-ca9a-475d-9e27-ddd13d852902
	I0624 05:29:50.039533    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:50.039533    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:50.039533    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:50.039572    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:50.039572    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:50.040015    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:50.542295    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:50.542295    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:50.542295    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:50.542295    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:50.547503    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:29:50.547503    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:50.547503    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:50.547570    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:50 GMT
	I0624 05:29:50.547570    6684 round_trippers.go:580]     Audit-Id: 0b34cea7-aecb-48fd-b378-69c3a2fa44b9
	I0624 05:29:50.547570    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:50.547570    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:50.547570    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:50.547570    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:50.547786    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:51.038305    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:51.038305    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:51.038305    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:51.038305    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:51.042934    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:51.043088    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:51.043088    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:51 GMT
	I0624 05:29:51.043088    6684 round_trippers.go:580]     Audit-Id: 3a6dcb17-0b99-4b4c-91e8-4a02fd98961d
	I0624 05:29:51.043088    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:51.043088    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:51.043088    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:51.043088    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:51.043088    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:51.043331    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:51.545781    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:51.545781    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:51.545781    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:51.545781    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:51.550361    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:51.550361    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:51.550361    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:51.550361    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:51.550361    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:51.550361    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:51.550361    6684 round_trippers.go:580]     Content-Length: 4030
	I0624 05:29:51.550361    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:51 GMT
	I0624 05:29:51.550361    6684 round_trippers.go:580]     Audit-Id: 78df761d-e44a-4285-8a0a-8f660f48088c
	I0624 05:29:51.550361    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"593","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0624 05:29:52.037353    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:52.037353    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:52.037353    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:52.037353    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:52.133653    6684 round_trippers.go:574] Response Status: 200 OK in 96 milliseconds
	I0624 05:29:52.133711    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:52.133711    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:52.133711    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:52 GMT
	I0624 05:29:52.133711    6684 round_trippers.go:580]     Audit-Id: c1f68424-b76c-48ba-a292-77fdb8f6db10
	I0624 05:29:52.133711    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:52.133711    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:52.133711    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:52.133711    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:52.133711    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:52.538516    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:52.538595    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:52.538595    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:52.538595    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:52.543934    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:29:52.543934    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:52.543934    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:52.543934    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:52.543934    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:52 GMT
	I0624 05:29:52.543934    6684 round_trippers.go:580]     Audit-Id: 684f43b2-8d71-45a7-8ea6-6faa3737f223
	I0624 05:29:52.543934    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:52.543934    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:52.545026    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:53.039185    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:53.039328    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:53.039328    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:53.039328    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:53.044932    6684 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:29:53.045084    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:53.045105    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:53 GMT
	I0624 05:29:53.045105    6684 round_trippers.go:580]     Audit-Id: acb9e1ff-e734-4279-84f1-dfddfafd75df
	I0624 05:29:53.045105    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:53.045105    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:53.045105    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:53.045105    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:53.045516    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:53.544374    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:53.544597    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:53.544597    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:53.544691    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:53.551868    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:29:53.551868    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:53.551868    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:53 GMT
	I0624 05:29:53.551868    6684 round_trippers.go:580]     Audit-Id: b40adf03-8725-4c9e-a98b-89ac5a4d6706
	I0624 05:29:53.551868    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:53.551868    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:53.551868    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:53.551868    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:53.552498    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:54.038195    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:54.038195    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:54.038195    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:54.038195    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:54.041754    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:54.041754    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:54.041754    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:54.041754    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:54 GMT
	I0624 05:29:54.041754    6684 round_trippers.go:580]     Audit-Id: b3238158-1eb6-494d-acdd-a42dd9be0c11
	I0624 05:29:54.041754    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:54.041754    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:54.041754    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:54.041754    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:54.545022    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:54.545022    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:54.545022    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:54.545149    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:54.551211    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:29:54.552235    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:54.552235    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:54.552235    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:54.552235    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:54.552235    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:54.552235    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:54 GMT
	I0624 05:29:54.552235    6684 round_trippers.go:580]     Audit-Id: 53203ce9-1e16-4961-babd-6113485fada6
	I0624 05:29:54.552235    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:54.552235    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:55.036838    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:55.036904    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:55.036904    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:55.036904    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:55.040220    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:55.040220    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:55.041201    6684 round_trippers.go:580]     Audit-Id: 2a42487d-06b8-4eb2-9e89-333926ea07bf
	I0624 05:29:55.041201    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:55.041201    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:55.041201    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:55.041245    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:55.041245    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:55 GMT
	I0624 05:29:55.041398    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:55.546895    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:55.546895    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:55.546895    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:55.546895    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:55.550659    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:55.550659    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:55.550659    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:55.551247    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:55.551247    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:55 GMT
	I0624 05:29:55.551247    6684 round_trippers.go:580]     Audit-Id: 0c77cf33-f1cd-4a3b-ab63-abf2f5f2b2c9
	I0624 05:29:55.551282    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:55.551282    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:55.551420    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:56.035830    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:56.035920    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:56.035920    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:56.035920    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:56.039429    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:56.039429    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:56.039429    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:56.039943    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:56.039943    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:56 GMT
	I0624 05:29:56.039943    6684 round_trippers.go:580]     Audit-Id: 4e35ff0c-e869-47ff-8494-4404bfe3b728
	I0624 05:29:56.039943    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:56.039943    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:56.040027    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:56.543321    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:56.543378    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:56.543378    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:56.543449    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:56.546812    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:56.546812    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:56.546812    6684 round_trippers.go:580]     Audit-Id: c3ea09ad-a16a-400e-a672-a7605827f1c6
	I0624 05:29:56.546812    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:56.546812    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:56.546812    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:56.546812    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:56.546812    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:56 GMT
	I0624 05:29:56.547446    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:57.035078    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:57.035367    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:57.035438    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:57.035438    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:57.042881    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:29:57.042881    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:57.042881    6684 round_trippers.go:580]     Audit-Id: c41ab6c4-a706-409b-bed9-e0268c359f44
	I0624 05:29:57.042881    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:57.042881    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:57.042881    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:57.042881    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:57.042881    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:57 GMT
	I0624 05:29:57.042881    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:57.043624    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:57.534996    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:57.535115    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:57.535115    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:57.535115    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:57.539784    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:57.539784    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:57.539857    6684 round_trippers.go:580]     Audit-Id: 41fb2e72-7f36-45cc-8223-adfe5d68c90e
	I0624 05:29:57.539857    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:57.539857    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:57.539857    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:57.539857    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:57.539857    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:57 GMT
	I0624 05:29:57.540154    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:58.036341    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:58.036341    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:58.036341    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:58.036341    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:58.041012    6684 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:29:58.041098    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:58.041098    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:58.041098    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:58 GMT
	I0624 05:29:58.041098    6684 round_trippers.go:580]     Audit-Id: cdec316e-d0c3-45f1-a9c0-b27f4491148c
	I0624 05:29:58.041098    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:58.041098    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:58.041098    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:58.041098    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:58.541086    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:58.541086    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:58.541086    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:58.541086    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:58.544687    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:29:58.544687    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:58.545020    6684 round_trippers.go:580]     Audit-Id: 8f42bcbf-1a43-41b6-897c-45fd758f49dd
	I0624 05:29:58.545020    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:58.545020    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:58.545020    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:58.545020    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:58.545020    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:58 GMT
	I0624 05:29:58.545111    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:59.043404    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:59.043404    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:59.043404    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:59.043749    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:59.050176    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:29:59.050176    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:59.050176    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:59.050176    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:59 GMT
	I0624 05:29:59.050176    6684 round_trippers.go:580]     Audit-Id: 1b158878-2e33-485a-85d1-55989e3d6dee
	I0624 05:29:59.050176    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:59.050176    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:59.050176    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:59.051510    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:29:59.051960    6684 node_ready.go:53] node "multinode-876600-m02" has status "Ready":"False"
	I0624 05:29:59.532630    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:29:59.532709    6684 round_trippers.go:469] Request Headers:
	I0624 05:29:59.532709    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:29:59.532709    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:29:59.533043    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:29:59.536610    6684 round_trippers.go:577] Response Headers:
	I0624 05:29:59.536610    6684 round_trippers.go:580]     Audit-Id: fdc373e2-48d6-463b-b6cf-3a8d23b86e86
	I0624 05:29:59.536610    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:29:59.536610    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:29:59.536610    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:29:59.536610    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:29:59.536610    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:29:59 GMT
	I0624 05:29:59.536853    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:30:00.050273    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:30:00.050273    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:00.050273    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:00.050551    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:00.057881    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:30:00.057881    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:00.057881    6684 round_trippers.go:580]     Audit-Id: 757f340f-f651-4168-9800-3d575da167d0
	I0624 05:30:00.057881    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:00.057881    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:00.057881    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:00.057881    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:00.057881    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:00 GMT
	I0624 05:30:00.058386    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:30:00.532182    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:30:00.532182    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:00.532269    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:00.532269    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:00.532631    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:00.536543    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:00.536543    6684 round_trippers.go:580]     Audit-Id: 8b116763-6f47-458f-8489-204aa00698b6
	I0624 05:30:00.536543    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:00.536543    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:00.536543    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:00.536543    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:00.536543    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:00 GMT
	I0624 05:30:00.536799    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"603","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0624 05:30:01.038525    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:30:01.038525    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.038794    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.038794    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.039179    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.042891    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.042891    6684 round_trippers.go:580]     Audit-Id: 74b5969a-7254-4c4c-99fe-3acc4c5bf1d8
	I0624 05:30:01.042891    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.042891    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.042891    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.042891    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.043137    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.043367    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"624","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I0624 05:30:01.043900    6684 node_ready.go:49] node "multinode-876600-m02" has status "Ready":"True"
	I0624 05:30:01.044122    6684 node_ready.go:38] duration metric: took 18.0131605s for node "multinode-876600-m02" to be "Ready" ...
	I0624 05:30:01.044212    6684 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:30:01.044373    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods
	I0624 05:30:01.044373    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.044373    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.044373    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.052234    6684 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:30:01.052234    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.052234    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.052234    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.052234    6684 round_trippers.go:580]     Audit-Id: 73b65dfc-99be-41d4-95dc-397552c86540
	I0624 05:30:01.052234    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.052234    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.052234    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.053735    6684 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"624"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"420","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0624 05:30:01.057444    6684 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.057557    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:30:01.057557    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.057557    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.057557    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.060800    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:30:01.060800    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.060800    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.060800    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.060800    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.060902    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.060902    6684 round_trippers.go:580]     Audit-Id: 83581a7e-9faa-4be7-a84e-2952bc624a2c
	I0624 05:30:01.060902    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.061111    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"420","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0624 05:30:01.061783    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.061783    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.061783    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.061783    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.063397    6684 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:30:01.065309    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.065309    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.065370    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.065370    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.065370    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.065370    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.065370    6684 round_trippers.go:580]     Audit-Id: e3a32170-58ef-4e0e-af47-d1ec8f5dcb67
	I0624 05:30:01.065781    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:01.066321    6684 pod_ready.go:92] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.066358    6684 pod_ready.go:81] duration metric: took 8.8629ms for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.066392    6684 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.066501    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:30:01.066535    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.066535    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.066535    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.067277    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.067277    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.069817    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.069817    6684 round_trippers.go:580]     Audit-Id: 621bbb55-82bc-43ce-aa87-407a918d3ec4
	I0624 05:30:01.069817    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.069817    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.069817    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.069817    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.070625    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"4906666c-eed2-4f7c-a011-5a9b589fdcdc","resourceVersion":"386","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.211.219:2379","kubernetes.io/config.hash":"1e708d5cd73627b4d4daa56de34a8c4e","kubernetes.io/config.mirror":"1e708d5cd73627b4d4daa56de34a8c4e","kubernetes.io/config.seen":"2024-06-24T12:26:27.293357655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0624 05:30:01.071353    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.071390    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.071390    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.071424    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.073835    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:30:01.074136    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.074136    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.074136    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.074136    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.074326    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.074444    6684 round_trippers.go:580]     Audit-Id: 893c4236-fd19-43ae-8906-4ed0e4b3d5a9
	I0624 05:30:01.074444    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.074706    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:01.075168    6684 pod_ready.go:92] pod "etcd-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.075168    6684 pod_ready.go:81] duration metric: took 8.7757ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.075168    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.075168    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:30:01.075168    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.075168    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.075168    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.081557    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:30:01.081557    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.081557    6684 round_trippers.go:580]     Audit-Id: 9bd6ff98-7c74-42ae-9988-6b742417473e
	I0624 05:30:01.081557    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.081557    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.081557    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.081557    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.081557    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.082280    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a7f191-9dd7-4dcd-8e9e-d05deeac2349","resourceVersion":"384","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.211.219:8443","kubernetes.io/config.hash":"f659c666f2215840bd65758467c8d95f","kubernetes.io/config.mirror":"f659c666f2215840bd65758467c8d95f","kubernetes.io/config.seen":"2024-06-24T12:26:27.293359155Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0624 05:30:01.082830    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.082830    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.083022    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.083050    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.085651    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:30:01.085651    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.085651    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.085833    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.085833    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.085833    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.085833    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.085833    6684 round_trippers.go:580]     Audit-Id: a854394a-d563-42aa-a18b-c49fefba73f2
	I0624 05:30:01.086052    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:01.086537    6684 pod_ready.go:92] pod "kube-apiserver-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.086573    6684 pod_ready.go:81] duration metric: took 11.4052ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.086573    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.086732    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:30:01.086769    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.086769    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.086804    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.088760    6684 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:30:01.088760    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.088760    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.088760    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.089928    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.089928    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.089928    6684 round_trippers.go:580]     Audit-Id: 9b23e0bb-de09-4b7b-99ce-74314c7751bb
	I0624 05:30:01.089928    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.090007    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"383","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0624 05:30:01.090878    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.090952    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.090952    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.091003    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.091120    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.091120    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.091120    6684 round_trippers.go:580]     Audit-Id: 84ec0475-98a7-4c70-a3dd-9aa06605aec4
	I0624 05:30:01.091120    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.091120    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.091120    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.091120    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.091120    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.091120    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:01.091120    6684 pod_ready.go:92] pod "kube-controller-manager-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.091120    6684 pod_ready.go:81] duration metric: took 4.4974ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.091120    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.253300    6684 request.go:629] Waited for 162.1789ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:30:01.253575    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:30:01.253575    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.253575    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.253575    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.254258    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.254258    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.254258    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.254258    6684 round_trippers.go:580]     Audit-Id: 53e29c4b-a10f-48cf-9ef1-44dc12097496
	I0624 05:30:01.258513    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.258513    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.258513    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.258513    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.258848    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hjjs8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e148504-3300-4591-9576-7c5597851f41","resourceVersion":"609","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0624 05:30:01.442118    6684 request.go:629] Waited for 182.2102ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:30:01.442379    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:30:01.442379    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.442379    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.442455    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.442814    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.442814    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.447040    6684 round_trippers.go:580]     Audit-Id: 9e52f5aa-5cf5-45c5-846f-f96d310f2bcc
	I0624 05:30:01.447040    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.447096    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.447096    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.447153    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.447153    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.447443    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"624","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I0624 05:30:01.448036    6684 pod_ready.go:92] pod "kube-proxy-hjjs8" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.448145    6684 pod_ready.go:81] duration metric: took 357.0233ms for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.448145    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.653977    6684 request.go:629] Waited for 205.193ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:30:01.654090    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:30:01.654090    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.654090    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.654090    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.658032    6684 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:30:01.658032    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.659553    6684 round_trippers.go:580]     Audit-Id: 72373676-7a52-49c8-b1bf-a34e110fac2e
	I0624 05:30:01.659553    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.659553    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.659553    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.659553    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.659553    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.661399    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"378","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0624 05:30:01.851889    6684 request.go:629] Waited for 190.1819ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.852009    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:01.852009    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:01.852132    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:01.852132    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:01.852425    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:01.852425    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:01.852425    6684 round_trippers.go:580]     Audit-Id: 3b7fdf27-df9c-4f1f-ab31-e57ac5144fb2
	I0624 05:30:01.852425    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:01.852425    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:01.852425    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:01.852425    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:01.856022    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:01 GMT
	I0624 05:30:01.856299    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:01.856913    6684 pod_ready.go:92] pod "kube-proxy-lcc9v" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:01.856913    6684 pod_ready.go:81] duration metric: took 408.7664ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:01.857132    6684 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:02.052505    6684 request.go:629] Waited for 195.1843ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:30:02.052505    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:30:02.052505    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:02.052505    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:02.052505    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:02.053162    6684 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:30:02.057738    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:02.057738    6684 round_trippers.go:580]     Audit-Id: c81f1a2a-1298-43ec-a0e1-2a086e091b0f
	I0624 05:30:02.057738    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:02.057738    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:02.057738    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:02.057738    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:02.057738    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:02 GMT
	I0624 05:30:02.057927    6684 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"385","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0624 05:30:02.249423    6684 request.go:629] Waited for 190.4251ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:02.249678    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes/multinode-876600
	I0624 05:30:02.249678    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:02.249678    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:02.249678    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:02.256134    6684 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:30:02.256134    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:02.256134    6684 round_trippers.go:580]     Audit-Id: db1a6ff8-9691-4858-9877-d52da6680f28
	I0624 05:30:02.256134    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:02.256134    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:02.256134    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:02.256134    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:02.256134    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:02 GMT
	I0624 05:30:02.256930    6684 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0624 05:30:02.256930    6684 pod_ready.go:92] pod "kube-scheduler-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:30:02.257473    6684 pod_ready.go:81] duration metric: took 400.3395ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:30:02.257473    6684 pod_ready.go:38] duration metric: took 1.2131471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:30:02.257473    6684 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 05:30:02.269724    6684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:30:02.296142    6684 system_svc.go:56] duration metric: took 38.6695ms WaitForService to wait for kubelet
	I0624 05:30:02.296215    6684 kubeadm.go:576] duration metric: took 19.5494056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:30:02.296268    6684 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:30:02.442510    6684 request.go:629] Waited for 146.1108ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.211.219:8443/api/v1/nodes
	I0624 05:30:02.442749    6684 round_trippers.go:463] GET https://172.31.211.219:8443/api/v1/nodes
	I0624 05:30:02.442749    6684 round_trippers.go:469] Request Headers:
	I0624 05:30:02.442749    6684 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:30:02.442749    6684 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:30:02.445691    6684 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:30:02.445691    6684 round_trippers.go:577] Response Headers:
	I0624 05:30:02.445691    6684 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:30:02.445691    6684 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:30:02 GMT
	I0624 05:30:02.445691    6684 round_trippers.go:580]     Audit-Id: c29c89c7-05c2-4b69-a494-746673adeab5
	I0624 05:30:02.445691    6684 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:30:02.447190    6684 round_trippers.go:580]     Content-Type: application/json
	I0624 05:30:02.447190    6684 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:30:02.447619    6684 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"430","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9149 chars]
	I0624 05:30:02.448312    6684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:30:02.448312    6684 node_conditions.go:123] node cpu capacity is 2
	I0624 05:30:02.448312    6684 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:30:02.448312    6684 node_conditions.go:123] node cpu capacity is 2
	I0624 05:30:02.448312    6684 node_conditions.go:105] duration metric: took 152.044ms to run NodePressure ...
	I0624 05:30:02.448312    6684 start.go:240] waiting for startup goroutines ...
	I0624 05:30:02.448519    6684 start.go:254] writing updated cluster config ...
	I0624 05:30:02.463009    6684 ssh_runner.go:195] Run: rm -f paused
	I0624 05:30:02.619623    6684 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0624 05:30:02.622969    6684 out.go:177] * Done! kubectl is now configured to use "multinode-876600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.390450231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:52 multinode-876600 cri-dockerd[1223]: time="2024-06-24T12:26:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.773988077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.774585381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.774700082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.775657189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.898042965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.898230866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.898273567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:52 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:52.898408968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:52 multinode-876600 cri-dockerd[1223]: time="2024-06-24T12:26:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 12:26:53 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:53.130655139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:26:53 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:53.131086024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:26:53 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:53.131186620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:26:53 multinode-876600 dockerd[1321]: time="2024-06-24T12:26:53.131380413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:30:27 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:27.193194486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:30:27 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:27.194650289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:30:27 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:27.194758089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:30:27 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:27.195110290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:30:27 multinode-876600 cri-dockerd[1223]: time="2024-06-24T12:30:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 24 12:30:28 multinode-876600 cri-dockerd[1223]: time="2024-06-24T12:30:28Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 24 12:30:28 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:28.896396371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:30:28 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:28.896532873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:30:28 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:28.896554173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:30:28 multinode-876600 dockerd[1321]: time="2024-06-24T12:30:28.896667675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	83a09faf1e2d5       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   caf1b076e912f       storage-provisioner
	f46bdc12472e4       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	b0dd966ee710f       53c535741fb44                                                                                         4 minutes ago       Running             kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	7174bdea66e24       e874818b3caac                                                                                         4 minutes ago       Running             kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	d7d8d18e1b115       7820c83aa1394                                                                                         4 minutes ago       Running             kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	d781e9872808b       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   6d1c3ec125c93       etcd-multinode-876600
	eefbf63a6c05b       56ce0fd9fb532                                                                                         4 minutes ago       Running             kube-apiserver            0                   5f89e0f2608fe       kube-apiserver-multinode-876600
	
	
	==> coredns [f46bdc12472e] <==
	[INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	[INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	[INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	[INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	[INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	[INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	[INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	[INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	[INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	[INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	[INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	[INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	[INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	[INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	[INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	[INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	[INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	[INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	[INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	[INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	[INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	[INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	[INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	[INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	[INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	
	
	==> describe nodes <==
	Name:               multinode-876600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-876600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=multinode-876600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-876600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 12:31:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 12:31:04 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 12:31:04 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 12:31:04 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 12:31:04 +0000   Mon, 24 Jun 2024 12:26:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.211.219
	  Hostname:    multinode-876600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a304b550bcbe4ad28c23f1f143bd1ee1
	  System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	  Boot ID:                    7c8d12d9-2b87-4ba3-b407-7b680fbe289e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m35s
	  kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m35s
	  kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m34s  kube-proxy       
	  Normal  Starting                 4m49s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m36s  node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	  Normal  NodeReady                4m25s  kubelet          Node multinode-876600 status is now: NodeReady
	
	
	Name:               multinode-876600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-876600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=multinode-876600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-876600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 12:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 12:30:43 +0000   Mon, 24 Jun 2024 12:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 12:30:43 +0000   Mon, 24 Jun 2024 12:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 12:30:43 +0000   Mon, 24 Jun 2024 12:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 12:30:43 +0000   Mon, 24 Jun 2024 12:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.221.199
	  Hostname:    multinode-876600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	  System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	  Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      95s
	  kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x2 over 95s)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x2 over 95s)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x2 over 95s)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	  Normal  NodeReady                76s                kubelet          Node multinode-876600-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.141490] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun24 12:25] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.178428] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +31.727194] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.099977] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.506495] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.202703] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.217259] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +2.776708] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +0.199294] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.214536] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.299237] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[Jun24 12:26] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.109534] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.000172] systemd-fstab-generator[1505]: Ignoring "noauto" option for root device
	[  +6.595829] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.106231] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.049185] systemd-fstab-generator[2113]: Ignoring "noauto" option for root device
	[  +0.145169] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.065350] systemd-fstab-generator[2304]: Ignoring "noauto" option for root device
	[  +0.241055] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.713565] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.018374] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [d781e9872808] <==
	{"level":"info","ts":"2024-06-24T12:26:21.646934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-24T12:26:21.646965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 1"}
	{"level":"info","ts":"2024-06-24T12:26:21.64698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 2"}
	{"level":"info","ts":"2024-06-24T12:26:21.646987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 2"}
	{"level":"info","ts":"2024-06-24T12:26:21.646999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 2"}
	{"level":"info","ts":"2024-06-24T12:26:21.647007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 2"}
	{"level":"info","ts":"2024-06-24T12:26:21.657297Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:26:21.66404Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.211.219:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-24T12:26:21.664097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T12:26:21.667092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T12:26:21.684069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.211.219:2379"}
	{"level":"info","ts":"2024-06-24T12:26:21.690992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:26:21.693367Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:26:21.693473Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:26:21.691839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-24T12:26:21.693494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-24T12:26:21.692852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-24T12:27:05.584999Z","caller":"traceutil/trace.go:171","msg":"trace[181024742] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"175.259072ms","start":"2024-06-24T12:27:05.409703Z","end":"2024-06-24T12:27:05.584962Z","steps":["trace[181024742] 'process raft request'  (duration: 175.043378ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T12:27:08.242422Z","caller":"traceutil/trace.go:171","msg":"trace[1784292032] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"111.018939ms","start":"2024-06-24T12:27:08.131384Z","end":"2024-06-24T12:27:08.242403Z","steps":["trace[1784292032] 'process raft request'  (duration: 110.856144ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T12:29:35.077269Z","caller":"traceutil/trace.go:171","msg":"trace[1414458478] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"125.05626ms","start":"2024-06-24T12:29:34.952194Z","end":"2024-06-24T12:29:35.07725Z","steps":["trace[1414458478] 'process raft request'  (duration: 124.77656ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-24T12:29:52.133615Z","caller":"traceutil/trace.go:171","msg":"trace[1400167287] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"208.888348ms","start":"2024-06-24T12:29:51.924707Z","end":"2024-06-24T12:29:52.133595Z","steps":["trace[1400167287] 'process raft request'  (duration: 195.524932ms)","trace[1400167287] 'compare'  (duration: 12.863216ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-24T12:29:52.135508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.52702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-24T12:29:52.135922Z","caller":"traceutil/trace.go:171","msg":"trace[647794708] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:604; }","duration":"100.978922ms","start":"2024-06-24T12:29:52.034931Z","end":"2024-06-24T12:29:52.13591Z","steps":["trace[647794708] 'agreement among raft nodes before linearized reading'  (duration: 100.531221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-24T12:29:52.469922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.952254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-24T12:29:52.47004Z","caller":"traceutil/trace.go:171","msg":"trace[1006271252] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:604; }","duration":"128.109354ms","start":"2024-06-24T12:29:52.341915Z","end":"2024-06-24T12:29:52.470025Z","steps":["trace[1006271252] 'range keys from in-memory index tree'  (duration: 127.888454ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:31:16 up 7 min,  0 users,  load average: 0.09, 0.29, 0.17
	Linux multinode-876600 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f74eb1beb274] <==
	I0624 12:30:10.110773       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:30:20.120136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:30:20.120254       1 main.go:227] handling current node
	I0624 12:30:20.120269       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:30:20.120277       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:30:30.126739       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:30:30.126898       1 main.go:227] handling current node
	I0624 12:30:30.126921       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:30:30.126946       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:30:40.133121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:30:40.133222       1 main.go:227] handling current node
	I0624 12:30:40.133235       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:30:40.133242       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:30:50.140296       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:30:50.140451       1 main.go:227] handling current node
	I0624 12:30:50.140466       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:30:50.140526       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:31:00.155322       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:31:00.155349       1 main.go:227] handling current node
	I0624 12:31:00.155371       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:31:00.155553       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:31:10.161766       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:31:10.162129       1 main.go:227] handling current node
	I0624 12:31:10.162528       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:31:10.162606       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [eefbf63a6c05] <==
	I0624 12:26:24.602109       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0624 12:26:24.610432       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0624 12:26:24.610595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 12:26:25.818982       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 12:26:25.895656       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 12:26:26.021675       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0624 12:26:26.040392       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219]
	I0624 12:26:26.041851       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 12:26:26.049625       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 12:26:26.701385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 12:26:27.269341       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 12:26:27.297323       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0624 12:26:27.357130       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 12:26:40.563111       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0624 12:26:41.122773       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0624 12:30:32.586170       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63815: use of closed network connection
	E0624 12:30:33.027450       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63818: use of closed network connection
	E0624 12:30:33.516467       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63820: use of closed network connection
	E0624 12:30:33.959042       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63822: use of closed network connection
	E0624 12:30:34.391138       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63824: use of closed network connection
	E0624 12:30:34.836498       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63826: use of closed network connection
	E0624 12:30:35.616023       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63829: use of closed network connection
	E0624 12:30:46.052182       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63831: use of closed network connection
	E0624 12:30:46.460395       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63834: use of closed network connection
	E0624 12:30:56.863561       1 conn.go:339] Error on socket receive: read tcp 172.31.211.219:8443->172.31.208.1:63837: use of closed network connection
	
	
	==> kube-controller-manager [7174bdea66e2] <==
	I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	
	
	==> kube-proxy [b0dd966ee710] <==
	I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d7d8d18e1b11] <==
	W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
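The block of "forbidden" list/watch failures above is the usual transient noise from kube-scheduler's informers racing the apiserver's RBAC bootstrap while the control plane comes up; the closing "Caches are synced" line shows they cleared on their own. If they kept recurring, a quick host-side check would be to impersonate the scheduler with standard kubectl (context name taken from this log; these commands are a diagnostic sketch, not part of the test run):

	kubectl --context multinode-876600 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler --all-namespaces
	kubectl --context multinode-876600 auth can-i watch csistoragecapacities.storage.k8s.io --as=system:kube-scheduler --all-namespaces

Both should normally answer "yes" once the system:kube-scheduler ClusterRoleBinding is in place.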
	
	
	==> kubelet <==
	Jun 24 12:26:54 multinode-876600 kubelet[2120]: I0624 12:26:54.049580    2120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.049563983 podStartE2EDuration="5.049563983s" podCreationTimestamp="2024-06-24 12:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:26:54.049391289 +0000 UTC m=+26.902351978" watchObservedRunningTime="2024-06-24 12:26:54.049563983 +0000 UTC m=+26.902524672"
	Jun 24 12:27:27 multinode-876600 kubelet[2120]: E0624 12:27:27.414407    2120 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:27:27 multinode-876600 kubelet[2120]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:27:27 multinode-876600 kubelet[2120]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:27:27 multinode-876600 kubelet[2120]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:27:27 multinode-876600 kubelet[2120]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:28:27 multinode-876600 kubelet[2120]: E0624 12:28:27.410933    2120 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:28:27 multinode-876600 kubelet[2120]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:28:27 multinode-876600 kubelet[2120]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:28:27 multinode-876600 kubelet[2120]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:28:27 multinode-876600 kubelet[2120]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:29:27 multinode-876600 kubelet[2120]: E0624 12:29:27.407158    2120 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:29:27 multinode-876600 kubelet[2120]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:29:27 multinode-876600 kubelet[2120]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:29:27 multinode-876600 kubelet[2120]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:29:27 multinode-876600 kubelet[2120]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:30:26 multinode-876600 kubelet[2120]: I0624 12:30:26.668619    2120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podStartSLOduration=225.668599356 podStartE2EDuration="3m45.668599356s" podCreationTimestamp="2024-06-24 12:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:26:54.07870769 +0000 UTC m=+26.931668479" watchObservedRunningTime="2024-06-24 12:30:26.668599356 +0000 UTC m=+239.521560045"
	Jun 24 12:30:26 multinode-876600 kubelet[2120]: I0624 12:30:26.670221    2120 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	Jun 24 12:30:26 multinode-876600 kubelet[2120]: I0624 12:30:26.829037    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6r6\" (UniqueName: \"kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6\") pod \"busybox-fc5497c4f-ddhfw\" (UID: \"bdf96c8c-7151-4ac5-9548-ee114ce02793\") " pod="default/busybox-fc5497c4f-ddhfw"
	Jun 24 12:30:27 multinode-876600 kubelet[2120]: E0624 12:30:27.414587    2120 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:30:27 multinode-876600 kubelet[2120]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:30:27 multinode-876600 kubelet[2120]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:30:27 multinode-876600 kubelet[2120]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:30:27 multinode-876600 kubelet[2120]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:30:32 multinode-876600 kubelet[2120]: E0624 12:30:32.587641    2120 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46832->127.0.0.1:37947: write tcp 127.0.0.1:46832->127.0.0.1:37947: write: broken pipe
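The recurring "Could not set up iptables canary" blocks above come from kubelet's periodic monitor: it creates a KUBE-KUBELET-CANARY chain in the nat table so it can notice when something flushes iptables, and the IPv6 attempt fails here because the guest has no ip6tables nat table (exit status 3, "do you need to insmod?"). It repeats once a minute and nothing in this run depends on it. A hedged way to poke at it, assuming the buildroot ISO even ships the module for this kernel:

	out/minikube-windows-amd64.exe -p multinode-876600 ssh
	# inside the node:
	sudo ip6tables -t nat -L         # fails today: the nat table does not exist
	sudo modprobe ip6table_nat       # assumption: ip6table_nat is built for this kernel
	sudo ip6tables -t nat -L         # succeeds once the table exists

The broken-pipe line just above is unrelated: a connection proxied through kubelet was closed by its peer mid-write.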
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:31:08.967418    9160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
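The lone stderr line is the same Docker CLI context warning every minikube invocation in this run emits: the CLI config points at the "default" context, minikube tries to resolve it through the context store under C:\Users\jenkins.minikube1\.docker\contexts (contexts live in directories named by a digest of the context name), finds no meta.json, and downgrades the failure to a warning; the run continues normally on the hyperv driver. Two standard host-side commands to see what the Docker CLI itself reports (diagnostic only; whether resetting the context would quiet minikube is not something this log shows):

	docker context ls
	docker context inspect default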
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-876600 -n multinode-876600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-876600 -n multinode-876600: (11.8902452s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-876600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (492.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-876600
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-876600
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-876600: (1m37.8149899s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-876600 --wait=true -v=8 --alsologtostderr
E0624 05:48:21.882340     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 05:51:25.134134     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 05:53:21.870314     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-876600 --wait=true -v=8 --alsologtostderr: exit status 1 (5m43.3464314s)

                                                
                                                
-- stdout --
	* [multinode-876600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-876600" primary control-plane node in "multinode-876600" cluster
	* Restarting existing hyperv VM for "multinode-876600" ...
	* Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-876600-m02" worker node in "multinode-876600" cluster
	* Restarting existing hyperv VM for "multinode-876600-m02" ...
	* Found network options:
	  - NO_PROXY=172.31.217.139
	  - NO_PROXY=172.31.217.139
	* Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	  - env NO_PROXY=172.31.217.139

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:47:35.874317   14012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0624 05:47:35.880785   14012 out.go:291] Setting OutFile to fd 912 ...
	I0624 05:47:35.881481   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:47:35.881481   14012 out.go:304] Setting ErrFile to fd 500...
	I0624 05:47:35.881481   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:47:35.902984   14012 out.go:298] Setting JSON to false
	I0624 05:47:35.908378   14012 start.go:129] hostinfo: {"hostname":"minikube1","uptime":23711,"bootTime":1719209544,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 05:47:35.908378   14012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 05:47:35.977561   14012 out.go:177] * [multinode-876600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 05:47:36.093958   14012 notify.go:220] Checking for updates...
	I0624 05:47:36.136283   14012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:47:36.221393   14012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 05:47:36.229694   14012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 05:47:36.266219   14012 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 05:47:36.282155   14012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 05:47:36.287557   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:47:36.289122   14012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 05:47:41.683761   14012 out.go:177] * Using the hyperv driver based on existing profile
	I0624 05:47:41.716198   14012 start.go:297] selected driver: hyperv
	I0624 05:47:41.716198   14012 start.go:901] validating driver "hyperv" against &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:47:41.724181   14012 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 05:47:41.777496   14012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:47:41.777496   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:47:41.777496   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:47:41.777786   14012 start.go:340] cluster config:
	{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:47:41.778085   14012 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 05:47:41.822930   14012 out.go:177] * Starting "multinode-876600" primary control-plane node in "multinode-876600" cluster
	I0624 05:47:41.834828   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:47:41.835034   14012 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 05:47:41.835150   14012 cache.go:56] Caching tarball of preloaded images
	I0624 05:47:41.835578   14012 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:47:41.835832   14012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:47:41.836267   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:47:41.839458   14012 start.go:360] acquireMachinesLock for multinode-876600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:47:41.840016   14012 start.go:364] duration metric: took 558.9µs to acquireMachinesLock for "multinode-876600"
	I0624 05:47:41.840386   14012 start.go:96] Skipping create...Using existing machine configuration
	I0624 05:47:41.840441   14012 fix.go:54] fixHost starting: 
	I0624 05:47:41.841211   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:44.499606   14012 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 05:47:44.510968   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:44.510968   14012 fix.go:112] recreateIfNeeded on multinode-876600: state=Stopped err=<nil>
	W0624 05:47:44.510968   14012 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 05:47:44.521768   14012 out.go:177] * Restarting existing hyperv VM for "multinode-876600" ...
	I0624 05:47:44.560050   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600
	I0624 05:47:47.561367   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:47.561464   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:47.561464   14012 main.go:141] libmachine: Waiting for host to start...
	I0624 05:47:47.561543   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:49.748922   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:47:49.761167   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:49.761250   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:47:52.160828   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:52.160828   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:53.172014   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:55.333472   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:47:55.344925   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:55.344925   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:47:57.777506   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:57.777506   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:58.778248   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:00.944614   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:00.953095   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:00.953254   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:03.399362   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:48:03.399362   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:04.403630   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:06.562200   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:06.562635   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:06.562741   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:09.027310   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:48:09.027310   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:10.041460   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:12.234249   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:12.234249   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:12.241903   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:14.762054   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:14.762054   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:14.773865   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:19.311624   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:19.311821   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:19.312076   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:48:19.315026   14012 machine.go:94] provisionDockerMachine start ...
	I0624 05:48:19.315109   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:21.367280   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:21.367280   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:21.377733   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:23.795383   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:23.795383   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:23.812454   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:23.813213   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:23.813213   14012 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:48:23.941448   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 05:48:23.941555   14012 buildroot.go:166] provisioning hostname "multinode-876600"
	I0624 05:48:23.941637   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:26.031170   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:26.031170   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:26.043047   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:28.498014   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:28.498014   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:28.514891   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:28.515507   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:28.515507   14012 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600 && echo "multinode-876600" | sudo tee /etc/hostname
	I0624 05:48:28.665093   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600
	
	I0624 05:48:28.665218   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:30.705686   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:30.717217   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:30.717403   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:33.205040   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:33.205040   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:33.222256   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:33.222256   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:33.222903   14012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:48:33.360338   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 05:48:33.360455   14012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:48:33.360605   14012 buildroot.go:174] setting up certificates
	I0624 05:48:33.360605   14012 provision.go:84] configureAuth start
	I0624 05:48:33.360653   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:35.484443   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:35.484443   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:35.484651   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:37.913308   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:37.913308   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:37.924422   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:39.990065   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:39.990065   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:40.000535   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:42.412433   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:42.412433   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:42.423902   14012 provision.go:143] copyHostCerts
	I0624 05:48:42.424151   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:48:42.424478   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:48:42.424478   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:48:42.424728   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:48:42.426192   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:48:42.426547   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:48:42.426547   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:48:42.426547   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:48:42.428071   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:48:42.428368   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:48:42.428368   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:48:42.428767   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:48:42.429871   14012 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600 san=[127.0.0.1 172.31.217.139 localhost minikube multinode-876600]
	I0624 05:48:42.579627   14012 provision.go:177] copyRemoteCerts
	I0624 05:48:42.590215   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:48:42.590215   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:44.603809   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:44.603809   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:44.614278   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:47.051839   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:47.051839   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:47.063884   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:48:47.169335   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5791028s)
	I0624 05:48:47.169424   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:48:47.169954   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:48:47.215116   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:48:47.215637   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0624 05:48:47.260635   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:48:47.261202   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 05:48:47.306709   14012 provision.go:87] duration metric: took 13.9459462s to configureAuth
	I0624 05:48:47.306769   14012 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:48:47.307934   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:48:47.308157   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:49.355089   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:49.366289   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:49.366415   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:51.851609   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:51.851609   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:51.857185   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:51.857941   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:51.857941   14012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:48:51.983187   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:48:51.983187   14012 buildroot.go:70] root file system type: tmpfs
	I0624 05:48:51.983661   14012 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:48:51.983803   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:54.063579   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:54.063579   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:54.074296   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:56.495607   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:56.495607   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:56.517067   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:56.517299   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:56.517299   14012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:48:56.679303   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:48:56.679440   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:58.750573   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:58.750573   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:58.762462   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:01.332721   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:01.343878   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:01.351330   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:01.351330   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:01.351330   14012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:49:03.817641   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:49:03.817745   14012 machine.go:97] duration metric: took 44.5025033s to provisionDockerMachine
	I0624 05:49:03.817791   14012 start.go:293] postStartSetup for "multinode-876600" (driver="hyperv")
	I0624 05:49:03.817791   14012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:49:03.828976   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:49:03.828976   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:05.917203   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:05.928220   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:05.928404   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:08.384574   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:08.384574   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:08.385107   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:08.487134   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6581409s)
	I0624 05:49:08.505521   14012 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:49:08.517083   14012 command_runner.go:130] > NAME=Buildroot
	I0624 05:49:08.517188   14012 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:49:08.517188   14012 command_runner.go:130] > ID=buildroot
	I0624 05:49:08.517188   14012 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:49:08.517188   14012 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:49:08.517188   14012 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:49:08.517319   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:49:08.517791   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:49:08.519070   14012 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:49:08.519070   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:49:08.530635   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:49:08.550071   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:49:08.595311   14012 start.go:296] duration metric: took 4.7775028s for postStartSetup
	I0624 05:49:08.595509   14012 fix.go:56] duration metric: took 1m26.7547463s for fixHost
	I0624 05:49:08.595663   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:10.624723   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:10.624723   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:10.624866   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:13.078139   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:13.091367   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:13.097690   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:13.098290   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:13.098290   14012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 05:49:13.219657   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719233353.215073916
	
	I0624 05:49:13.219657   14012 fix.go:216] guest clock: 1719233353.215073916
	I0624 05:49:13.219754   14012 fix.go:229] Guest: 2024-06-24 05:49:13.215073916 -0700 PDT Remote: 2024-06-24 05:49:08.5955439 -0700 PDT m=+92.801165501 (delta=4.619530016s)
	I0624 05:49:13.219836   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:15.286491   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:15.286491   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:15.286740   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:17.715232   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:17.719070   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:17.725756   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:17.726686   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:17.726686   14012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719233353
	I0624 05:49:17.859280   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:49:13 UTC 2024
	
	I0624 05:49:17.859350   14012 fix.go:236] clock set: Mon Jun 24 12:49:13 UTC 2024
	 (err=<nil>)
	I0624 05:49:17.859385   14012 start.go:83] releasing machines lock for "multinode-876600", held for 1m36.0190136s
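One detail in the fixHost timings: after the Hyper-V restart, minikube reads the guest clock over SSH (date +%s.%N), compares it with the host (delta=4.619530016s in the fix.go:229 line above), and then sets the guest clock explicitly with date -s before releasing the machines lock. The manual equivalent, with the epoch value copied from this log rather than computed fresh, would be:

	out/minikube-windows-amd64.exe -p multinode-876600 ssh
	# inside the node:
	date -u                      # guest clock before the correction
	sudo date -s @1719233353     # set it from a supplied unix timestamp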
	I0624 05:49:17.859559   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:19.941531   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:19.953082   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:19.953152   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:22.374617   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:22.374617   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:22.391320   14012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:49:22.391448   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:22.403553   14012 ssh_runner.go:195] Run: cat /version.json
	I0624 05:49:22.403553   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:24.533805   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:24.533924   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:24.534028   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:27.126497   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:27.126497   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:27.138292   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:27.157547   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:27.157547   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:27.162012   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:27.235769   14012 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 05:49:27.235929   14012 ssh_runner.go:235] Completed: cat /version.json: (4.8321981s)
	I0624 05:49:27.248698   14012 ssh_runner.go:195] Run: systemctl --version
	I0624 05:49:27.307856   14012 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:49:27.307946   14012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9166072s)
	I0624 05:49:27.308036   14012 command_runner.go:130] > systemd 252 (252)
	I0624 05:49:27.308077   14012 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 05:49:27.319284   14012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:49:27.322999   14012 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 05:49:27.328935   14012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:49:27.339751   14012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:49:27.365596   14012 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:49:27.367582   14012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 05:49:27.367582   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:49:27.367840   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:49:27.398576   14012 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:49:27.413526   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:49:27.448573   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:49:27.469595   14012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:49:27.483173   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:49:27.516238   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:49:27.544259   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:49:27.573981   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:49:27.606795   14012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:49:27.637009   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:49:27.667351   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:49:27.698788   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 05:49:27.730030   14012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:49:27.746470   14012 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:49:27.759990   14012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:49:27.787789   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:27.978133   14012 ssh_runner.go:195] Run: sudo systemctl restart containerd
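
The containerd section above is a fixed sequence of in-place sed edits to /etc/containerd/config.toml (pause image, SystemdCgroup = false for the cgroupfs driver, CNI conf_dir, and so on) followed by a daemon-reload and restart. A sketch that replays a representative subset of that sequence; it is meant to run inside the guest, and minikube drives each step through its SSH runner rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the log above (subset); each one rewrites
	// /etc/containerd/config.toml in place, then containerd is restarted.
	steps := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart containerd`,
	}
	for _, step := range steps {
		if out, err := exec.Command("/bin/bash", "-c", step).CombinedOutput(); err != nil {
			fmt.Printf("step failed: %s\n%v: %s\n", step, err, out)
			return
		}
	}
}
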
	I0624 05:49:28.006804   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:49:28.022893   14012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:49:28.044875   14012 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:49:28.044875   14012 command_runner.go:130] > [Unit]
	I0624 05:49:28.044875   14012 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:49:28.044875   14012 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:49:28.044875   14012 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:49:28.044875   14012 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:49:28.044875   14012 command_runner.go:130] > StartLimitBurst=3
	I0624 05:49:28.044875   14012 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:49:28.044875   14012 command_runner.go:130] > [Service]
	I0624 05:49:28.045042   14012 command_runner.go:130] > Type=notify
	I0624 05:49:28.045042   14012 command_runner.go:130] > Restart=on-failure
	I0624 05:49:28.045042   14012 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:49:28.045042   14012 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:49:28.045042   14012 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:49:28.045042   14012 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:49:28.045042   14012 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:49:28.045042   14012 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:49:28.045042   14012 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:49:28.045178   14012 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:49:28.045178   14012 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecStart=
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:49:28.045178   14012 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:49:28.045178   14012 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:49:28.045178   14012 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:49:28.045313   14012 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:49:28.045313   14012 command_runner.go:130] > LimitCORE=infinity
	I0624 05:49:28.045397   14012 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:49:28.045397   14012 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:49:28.045397   14012 command_runner.go:130] > TasksMax=infinity
	I0624 05:49:28.045397   14012 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:49:28.045397   14012 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:49:28.045397   14012 command_runner.go:130] > Delegate=yes
	I0624 05:49:28.045397   14012 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:49:28.045397   14012 command_runner.go:130] > KillMode=process
	I0624 05:49:28.045499   14012 command_runner.go:130] > [Install]
	I0624 05:49:28.045499   14012 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:49:28.059973   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:49:28.091667   14012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:49:28.138019   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:49:28.175833   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:49:28.209589   14012 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:49:28.266376   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:49:28.289907   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:49:28.317969   14012 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:49:28.333318   14012 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:49:28.339785   14012 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:49:28.350418   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:49:28.370602   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:49:28.410312   14012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:49:28.602162   14012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:49:28.773723   14012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:49:28.774011   14012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:49:28.820013   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:28.989642   14012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:49:31.630268   14012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.640522s)
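
docker.go:574 above pins Docker to the cgroupfs driver by copying a small /etc/docker/daemon.json (130 bytes) into the guest and restarting the service. The exact file contents are not printed in the log; the fields below are an assumed, illustrative configuration for that purpose, written to a local file only to show the shape of it:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Assumed contents; the log only records the copy and the restart.
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// minikube scp's this to /etc/docker/daemon.json on the guest and then
	// runs `sudo systemctl daemon-reload` and `sudo systemctl restart docker`.
	if err := os.WriteFile("daemon.json", data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
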
	I0624 05:49:31.644407   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:49:31.682245   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:49:31.717283   14012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:49:31.892114   14012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:49:32.072298   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:32.250037   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:49:32.291868   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:49:32.328978   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:32.504679   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:49:32.605839   14012 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:49:32.619028   14012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:49:32.628363   14012 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:49:32.628498   14012 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:49:32.628498   14012 command_runner.go:130] > Device: 0,22	Inode: 865         Links: 1
	I0624 05:49:32.628498   14012 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:49:32.628498   14012 command_runner.go:130] > Access: 2024-06-24 12:49:32.514780067 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] > Modify: 2024-06-24 12:49:32.514780067 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] > Change: 2024-06-24 12:49:32.518779983 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] >  Birth: -
	I0624 05:49:32.628621   14012 start.go:562] Will wait 60s for crictl version
	I0624 05:49:32.641328   14012 ssh_runner.go:195] Run: which crictl
	I0624 05:49:32.646872   14012 command_runner.go:130] > /usr/bin/crictl
	I0624 05:49:32.659436   14012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:49:32.719145   14012 command_runner.go:130] > Version:  0.1.0
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeName:  docker
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:49:32.719145   14012 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
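
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a stat-based poll until the socket appears. A minimal sketch of that wait; the helper name is mine:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// timeout expires, mirroring the 60s wait in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
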
	I0624 05:49:32.728177   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:49:32.761002   14012 command_runner.go:130] > 26.1.4
	I0624 05:49:32.772410   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:49:32.801743   14012 command_runner.go:130] > 26.1.4
	I0624 05:49:32.805936   14012 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:49:32.805936   14012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:49:32.813498   14012 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:49:32.813498   14012 ip.go:210] interface addr: 172.31.208.1/20
	I0624 05:49:32.824921   14012 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:49:32.830964   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:49:32.850186   14012 kubeadm.go:877] updating cluster {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 05:49:32.850826   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:49:32.859963   14012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:49:32.884850   14012 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0624 05:49:32.884850   14012 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0624 05:49:32.884938   14012 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:49:32.885016   14012 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0624 05:49:32.885109   14012 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0624 05:49:32.885136   14012 docker.go:615] Images already preloaded, skipping extraction
	I0624 05:49:32.895578   14012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0624 05:49:32.926654   14012 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0624 05:49:32.926654   14012 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:49:32.926654   14012 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0624 05:49:32.926782   14012 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0624 05:49:32.926782   14012 cache_images.go:84] Images are preloaded, skipping loading
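
The preload check above lists the images the runtime already has ("docker images --format {{.Repository}}:{{.Tag}}") and skips extraction when every expected image is present. A sketch of that comparison; the helper name is mine and the expected list is a subset copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages returns which expected images are not reported by
// `docker images --format {{.Repository}}:{{.Tag}}`.
func missingImages(expected []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/kube-proxy:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	missing, err := missingImages(expected)
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	fmt.Println("missing:", missing)
}
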
	I0624 05:49:32.926910   14012 kubeadm.go:928] updating node { 172.31.217.139 8443 v1.30.2 docker true true} ...
	I0624 05:49:32.927169   14012 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.217.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
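
The kubelet unit drop-in printed above is rendered from the node's version, name and IP before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of that rendering with text/template; the type and template here are mine, not minikube's own:

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

type node struct {
	Version, Name, IP string
}

func main() {
	// Values copied from the log above for the control-plane node.
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, node{Version: "v1.30.2", Name: "multinode-876600", IP: "172.31.217.139"})
}
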
	I0624 05:49:32.936864   14012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 05:49:32.984193   14012 command_runner.go:130] > cgroupfs
	I0624 05:49:32.984478   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:49:32.984633   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:49:32.984670   14012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 05:49:32.984743   14012 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.217.139 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-876600 NodeName:multinode-876600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.217.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.217.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 05:49:32.984843   14012 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.217.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-876600"
	  kubeletExtraArgs:
	    node-ip: 172.31.217.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0624 05:49:32.998136   14012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubeadm
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubectl
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubelet
	I0624 05:49:33.017527   14012 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 05:49:33.030864   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 05:49:33.040697   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0624 05:49:33.079203   14012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:49:33.107622   14012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0624 05:49:33.161512   14012 ssh_runner.go:195] Run: grep 172.31.217.139	control-plane.minikube.internal$ /etc/hosts
	I0624 05:49:33.168275   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.217.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
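
The two /etc/hosts updates above (host.minikube.internal and control-plane.minikube.internal) use the same idempotent pattern: strip any existing line for the name with grep -v, append a fresh "ip<TAB>name" entry, and copy the result back with sudo. A sketch of that pattern; the helper name is mine, and printf stands in for the literal-tab echo used in the log:

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry rebuilds /etc/hosts on the guest with any stale line for
// name removed and a fresh "ip<TAB>name" line appended.
func ensureHostsEntry(ip, name string) error {
	script := fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; printf '%%s\t%%s\n' %q %q; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, ip, name)
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureHostsEntry("172.31.217.139", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
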
	I0624 05:49:33.205294   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:33.390514   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:49:33.413655   14012 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.217.139
	I0624 05:49:33.413655   14012 certs.go:194] generating shared ca certs ...
	I0624 05:49:33.413655   14012 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.420300   14012 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:49:33.420962   14012 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:49:33.421162   14012 certs.go:256] generating profile certs ...
	I0624 05:49:33.422002   14012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.key
	I0624 05:49:33.422002   14012 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81
	I0624 05:49:33.422002   14012 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.217.139]
	I0624 05:49:33.687208   14012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 ...
	I0624 05:49:33.687208   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81: {Name:mke29aa285d1480a4c0ffe6b00fae4b653965b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.696105   14012 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81 ...
	I0624 05:49:33.696105   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81: {Name:mk9bb0a6fbcaf4c73bc8f11ba3bdac939b7058e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.697907   14012 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt
	I0624 05:49:33.714027   14012 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key
	I0624 05:49:33.715812   14012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key
	I0624 05:49:33.715847   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:49:33.716067   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:49:33.716196   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:49:33.716196   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:49:33.716599   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 05:49:33.716959   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 05:49:33.717117   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 05:49:33.717117   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 05:49:33.717702   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:49:33.718538   14012 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:49:33.718538   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:49:33.718538   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:49:33.719380   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:49:33.719720   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:49:33.720266   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:49:33.720530   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:49:33.720714   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:49:33.720901   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:33.722347   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:49:33.769893   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:49:33.822980   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:49:33.871005   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:49:33.919551   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0624 05:49:33.968621   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 05:49:34.017050   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 05:49:34.061235   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0624 05:49:34.108648   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:49:34.154736   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:49:34.198680   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:49:34.248395   14012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 05:49:34.296627   14012 ssh_runner.go:195] Run: openssl version
	I0624 05:49:34.305304   14012 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:49:34.318218   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:49:34.354304   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:49:34.362303   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:49:34.362303   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:49:34.374333   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:49:34.383618   14012 command_runner.go:130] > 51391683
	I0624 05:49:34.398596   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 05:49:34.430042   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:49:34.459015   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.466217   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.466217   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.479295   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.486871   14012 command_runner.go:130] > 3ec20f2e
	I0624 05:49:34.503042   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 05:49:34.538978   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:49:34.571703   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.578295   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.578361   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.591714   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.594884   14012 command_runner.go:130] > b5213941
	I0624 05:49:34.613366   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0624 05:49:34.645755   14012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:49:34.654748   14012 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:49:34.654748   14012 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0624 05:49:34.654748   14012 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0624 05:49:34.654748   14012 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0624 05:49:34.654748   14012 command_runner.go:130] > Access: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] > Modify: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] > Change: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] >  Birth: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.668786   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 05:49:34.678596   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.691419   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 05:49:34.701314   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.714808   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 05:49:34.723729   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.737365   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 05:49:34.746988   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.759902   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 05:49:34.765490   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.787137   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0624 05:49:34.789296   14012 command_runner.go:130] > Certificate will not expire
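
Each "Certificate will not expire" above is the result of `openssl x509 -noout -checkend 86400`, i.e. "does this certificate expire within the next 24 hours". The same check can be expressed directly with crypto/x509; a sketch (helper name is mine):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, matching `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
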
	I0624 05:49:34.796838   14012 kubeadm.go:391] StartCluster: {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:49:34.805876   14012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 05:49:34.856777   14012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 05:49:34.876776   14012 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0624 05:49:34.876845   14012 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0624 05:49:34.876910   14012 command_runner.go:130] > /var/lib/minikube/etcd:
	I0624 05:49:34.876910   14012 command_runner.go:130] > member
	W0624 05:49:34.876975   14012 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 05:49:34.877006   14012 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 05:49:34.877129   14012 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 05:49:34.890260   14012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 05:49:34.909364   14012 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 05:49:34.910063   14012 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-876600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:34.910972   14012 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-876600" cluster setting kubeconfig missing "multinode-876600" context setting]
	I0624 05:49:34.912417   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:34.926855   14012 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:34.927834   14012 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.217.139:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:49:34.929776   14012 cert_rotation.go:137] Starting client certificate rotation controller
	I0624 05:49:34.941729   14012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 05:49:34.959286   14012 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0624 05:49:34.959615   14012 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0624 05:49:34.959733   14012 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0624 05:49:34.959733   14012 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0624 05:49:34.959733   14012 command_runner.go:130] >  kind: InitConfiguration
	I0624 05:49:34.959792   14012 command_runner.go:130] >  localAPIEndpoint:
	I0624 05:49:34.959792   14012 command_runner.go:130] > -  advertiseAddress: 172.31.211.219
	I0624 05:49:34.959792   14012 command_runner.go:130] > +  advertiseAddress: 172.31.217.139
	I0624 05:49:34.959792   14012 command_runner.go:130] >    bindPort: 8443
	I0624 05:49:34.959792   14012 command_runner.go:130] >  bootstrapTokens:
	I0624 05:49:34.959836   14012 command_runner.go:130] >    - groups:
	I0624 05:49:34.959836   14012 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0624 05:49:34.959836   14012 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0624 05:49:34.959836   14012 command_runner.go:130] >    name: "multinode-876600"
	I0624 05:49:34.959869   14012 command_runner.go:130] >    kubeletExtraArgs:
	I0624 05:49:34.959869   14012 command_runner.go:130] > -    node-ip: 172.31.211.219
	I0624 05:49:34.959869   14012 command_runner.go:130] > +    node-ip: 172.31.217.139
	I0624 05:49:34.959869   14012 command_runner.go:130] >    taints: []
	I0624 05:49:34.959869   14012 command_runner.go:130] >  ---
	I0624 05:49:34.959869   14012 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0624 05:49:34.959869   14012 command_runner.go:130] >  kind: ClusterConfiguration
	I0624 05:49:34.959869   14012 command_runner.go:130] >  apiServer:
	I0624 05:49:34.959869   14012 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.31.211.219"]
	I0624 05:49:34.959869   14012 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	I0624 05:49:34.959869   14012 command_runner.go:130] >    extraArgs:
	I0624 05:49:34.959869   14012 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0624 05:49:34.959869   14012 command_runner.go:130] >  controllerManager:
	I0624 05:49:34.959869   14012 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.31.211.219
	+  advertiseAddress: 172.31.217.139
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-876600"
	   kubeletExtraArgs:
	-    node-ip: 172.31.211.219
	+    node-ip: 172.31.217.139
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.31.211.219"]
	+  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
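
The drift check above is a plain `diff -u` between the kubeadm.yaml already on disk and the freshly generated kubeadm.yaml.new: any output means the advertise address, node IP or certSANs changed (here, 172.31.211.219 to 172.31.217.139) and the cluster is reconfigured from the new file. A sketch of that decision; the helper name is mine:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigChanged runs `diff -u old new`; exit code 0 means identical,
// 1 means the files differ (the drift case above), anything else is an error.
func kubeadmConfigChanged(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	changed, diff, err := kubeadmConfigChanged("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		fmt.Println("detected kubeadm config drift:")
		fmt.Print(diff)
	}
}
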
	I0624 05:49:34.959869   14012 kubeadm.go:1154] stopping kube-system containers ...
	I0624 05:49:34.969168   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 05:49:35.002833   14012 command_runner.go:130] > 83a09faf1e2d
	I0624 05:49:35.002833   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:49:35.002833   14012 command_runner.go:130] > caf1b076e912
	I0624 05:49:35.002833   14012 command_runner.go:130] > b42fe71aa0d7
	I0624 05:49:35.002833   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:49:35.002833   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:49:35.002833   14012 command_runner.go:130] > 2f2af473df8a
	I0624 05:49:35.002833   14012 command_runner.go:130] > d072caca0861
	I0624 05:49:35.002833   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:49:35.002833   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:49:35.002833   14012 command_runner.go:130] > d781e9872808
	I0624 05:49:35.002964   14012 command_runner.go:130] > eefbf63a6c05
	I0624 05:49:35.002964   14012 command_runner.go:130] > 0449d7721b5b
	I0624 05:49:35.002964   14012 command_runner.go:130] > 5f89e0f2608f
	I0624 05:49:35.002964   14012 command_runner.go:130] > 6d1c3ec125c9
	I0624 05:49:35.002964   14012 command_runner.go:130] > 6184b2eb79fd
	I0624 05:49:35.003060   14012 docker.go:483] Stopping containers: [83a09faf1e2d f46bdc12472e caf1b076e912 b42fe71aa0d7 f74eb1beb274 b0dd966ee710 2f2af473df8a d072caca0861 7174bdea66e2 d7d8d18e1b11 d781e9872808 eefbf63a6c05 0449d7721b5b 5f89e0f2608f 6d1c3ec125c9 6184b2eb79fd]
	I0624 05:49:35.012183   14012 ssh_runner.go:195] Run: docker stop 83a09faf1e2d f46bdc12472e caf1b076e912 b42fe71aa0d7 f74eb1beb274 b0dd966ee710 2f2af473df8a d072caca0861 7174bdea66e2 d7d8d18e1b11 d781e9872808 eefbf63a6c05 0449d7721b5b 5f89e0f2608f 6d1c3ec125c9 6184b2eb79fd
	I0624 05:49:35.037744   14012 command_runner.go:130] > 83a09faf1e2d
	I0624 05:49:35.037832   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:49:35.037832   14012 command_runner.go:130] > caf1b076e912
	I0624 05:49:35.037832   14012 command_runner.go:130] > b42fe71aa0d7
	I0624 05:49:35.037963   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:49:35.037963   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:49:35.037963   14012 command_runner.go:130] > 2f2af473df8a
	I0624 05:49:35.038040   14012 command_runner.go:130] > d072caca0861
	I0624 05:49:35.038040   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:49:35.038040   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:49:35.038040   14012 command_runner.go:130] > d781e9872808
	I0624 05:49:35.038040   14012 command_runner.go:130] > eefbf63a6c05
	I0624 05:49:35.038040   14012 command_runner.go:130] > 0449d7721b5b
	I0624 05:49:35.038040   14012 command_runner.go:130] > 5f89e0f2608f
	I0624 05:49:35.038040   14012 command_runner.go:130] > 6d1c3ec125c9
	I0624 05:49:35.038040   14012 command_runner.go:130] > 6184b2eb79fd
	I0624 05:49:35.054794   14012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 05:49:35.096732   14012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 05:49:35.099603   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:49:35.114559   14012 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:49:35.114559   14012 kubeadm.go:156] found existing configuration files:
	
	I0624 05:49:35.126633   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 05:49:35.136093   14012 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:49:35.144982   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:49:35.159249   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 05:49:35.188074   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 05:49:35.190043   14012 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:49:35.205735   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:49:35.216427   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 05:49:35.246360   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 05:49:35.262403   14012 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:49:35.262763   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:49:35.276071   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 05:49:35.307365   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 05:49:35.323355   14012 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:49:35.324262   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:49:35.337531   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
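The four grep-then-remove steps above follow one pattern: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint (or does not exist at all), it is deleted so the kubeadm phases that follow can regenerate it. A minimal Go sketch of that pattern, using the endpoint and file paths shown in the log; this is an illustration, not the minikube source:

// Minimal sketch (not the minikube source): for each kubeconfig, grep for the
// expected control-plane endpoint; if grep exits non-zero (pattern or file
// missing), remove the file so "kubeadm init phase kubeconfig" rewrites it.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // endpoint from the log
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// A non-zero grep exit corresponds to the "Process exited with status 2" lines above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "failed to remove %s: %v\n", f, err)
			}
		}
	}
}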
	I0624 05:49:35.369923   14012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 05:49:35.388354   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using the existing "sa" key
	I0624 05:49:35.702384   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.339007   14012 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 05:49:37.339143   14012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6367526s)
	I0624 05:49:37.339221   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.632544   14012 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 05:49:37.632642   14012 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 05:49:37.632642   14012 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0624 05:49:37.632715   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.725506   14012 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 05:49:37.725768   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.822436   14012 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
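At this point the control plane has been re-bootstrapped phase by phase: certs, kubeconfig, kubelet-start, control-plane, and etcd, each invoked against the same /var/tmp/minikube/kubeadm.yaml. A rough, illustrative equivalent of that sequence is sketched below; the paths and Kubernetes version are copied from the log, but the wrapper itself is not minikube's ssh_runner:

// Illustrative wrapper around the logged "kubeadm init phase" sequence.
// Paths and the Kubernetes version come from the log; the loop itself is a sketch.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.30.2"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}

	for _, phase := range phases {
		// Prepend the bundled binaries to PATH, exactly as the logged commands do.
		shell := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, config)
		cmd := exec.Command("/bin/bash", "-c", shell)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm init phase %s failed: %v", phase, err)
		}
	}
}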
	I0624 05:49:37.822599   14012 api_server.go:52] waiting for apiserver process to appear ...
	I0624 05:49:37.836555   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:38.351904   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:38.836201   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.343829   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.842660   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.862505   14012 command_runner.go:130] > 1846
	I0624 05:49:39.866272   14012 api_server.go:72] duration metric: took 2.0436984s to wait for apiserver process to appear ...
	I0624 05:49:39.866390   14012 api_server.go:88] waiting for apiserver healthz status ...
	I0624 05:49:39.866461   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.081334   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0624 05:49:43.081400   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0624 05:49:43.081400   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.111655   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0624 05:49:43.117294   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0624 05:49:43.374296   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.382481   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:43.382481   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:43.872578   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.897521   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:43.902445   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:44.372784   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:44.382947   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:44.383024   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:44.885781   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:44.897958   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
	I0624 05:49:44.901078   14012 round_trippers.go:463] GET https://172.31.217.139:8443/version
	I0624 05:49:44.901177   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:44.901177   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:44.901177   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:44.915530   14012 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 05:49:44.915581   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:44.915581   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:44.915615   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:44.915615   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Content-Length: 263
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:44 GMT
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Audit-Id: 9ff5c67f-66f8-416b-8ddd-e8f42a33bd36
	I0624 05:49:44.915652   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:44.915687   14012 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0624 05:49:44.915843   14012 api_server.go:141] control plane version: v1.30.2
	I0624 05:49:44.915927   14012 api_server.go:131] duration metric: took 5.049479s to wait for apiserver health ...
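The healthz wait above is a plain poll: anonymous GETs to /healthz first come back 403 (RBAC for anonymous access is not bootstrapped yet), then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok". A self-contained sketch of such a loop follows; the endpoint, deadline, and skip-verify TLS transport are illustrative assumptions rather than minikube configuration:

// Sketch of a healthz wait loop: poll /healthz, treat 403 and 500 as "not ready
// yet", stop on 200. Address, deadline, and the skip-verify transport are assumed.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.31.217.139:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond) // apiserver not accepting connections yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz: %s\n", body) // "ok"
			return
		}
		// 403: anonymous request rejected before RBAC bootstrap; 500: post-start hooks still failing.
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never reported healthy")
}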
	I0624 05:49:44.915927   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:49:44.915927   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:49:44.919402   14012 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0624 05:49:44.931920   14012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0624 05:49:44.953124   14012 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0624 05:49:44.953288   14012 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0624 05:49:44.953288   14012 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0624 05:49:44.953318   14012 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0624 05:49:44.953318   14012 command_runner.go:130] > Access: 2024-06-24 12:48:11.919340600 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] > Modify: 2024-06-21 04:42:41.000000000 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] > Change: 2024-06-24 12:48:00.203000000 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] >  Birth: -
	I0624 05:49:44.953318   14012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0624 05:49:44.953318   14012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0624 05:49:45.008863   14012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0624 05:49:46.237951   14012 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > daemonset.apps/kindnet configured
	I0624 05:49:46.238026   14012 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2291577s)
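The CNI step reduces to the single kubectl apply shown above, run inside the VM with the bundled kubectl and kubeconfig. Expressed as a plain exec call, purely for illustration (this is not minikube's ssh_runner):

// The logged CNI step as a plain exec call: apply the kindnet manifest with the
// bundled kubectl and the in-VM kubeconfig (paths taken from the log).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}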
	I0624 05:49:46.238026   14012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 05:49:46.238026   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:49:46.238026   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.238026   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.238026   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.239306   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.245532   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.245532   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Audit-Id: a56a3580-71d2-4edb-8079-62f0a6d6f081
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.245630   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.245691   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.248019   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87788 chars]
	I0624 05:49:46.255228   14012 system_pods.go:59] 12 kube-system pods found
	I0624 05:49:46.255228   14012 system_pods.go:61] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0624 05:49:46.255228   14012 system_pods.go:61] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0624 05:49:46.255228   14012 system_pods.go:61] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0624 05:49:46.255228   14012 system_pods.go:74] duration metric: took 17.2023ms to wait for pod list to return data ...
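The system_pods summary above is a listing of kube-system pods annotated with which containers are still unready after the restart. A comparable check with client-go might look like the sketch below; the kubeconfig path is an assumption for the example:

// Sketch with client-go: list kube-system pods and report containers that are
// still unready, roughly what the system_pods lines above summarize.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if !st.Ready {
				fmt.Printf("%s: container %q not ready\n", p.Name, st.Name)
			}
		}
	}
}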
	I0624 05:49:46.255228   14012 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:49:46.255228   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes
	I0624 05:49:46.255228   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.255228   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.255228   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.265017   14012 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 05:49:46.265017   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Audit-Id: eff60bee-cc73-4a09-98b1-1973870f0d6b
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.265017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.265017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.266281   14012 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0624 05:49:46.267717   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267772   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267885   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267922   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267922   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267922   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267922   14012 node_conditions.go:105] duration metric: took 12.6941ms to run NodePressure ...
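The NodePressure step reads each node's capacity (ephemeral storage and CPU in this run) from the /api/v1/nodes listing. A small client-go sketch that surfaces the same fields, again assuming a kubeconfig path for illustration:

// Sketch of the node-capacity read behind the NodePressure lines above: list
// nodes and print cpu and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral-storage capacity %s\n",
			n.Name, cpu.String(), storage.String())
	}
}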
	I0624 05:49:46.267922   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:46.733341   14012 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0624 05:49:46.733341   14012 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0624 05:49:46.733341   14012 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0624 05:49:46.733341   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0624 05:49:46.733341   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.733341   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.733341   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.735121   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.735121   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.735121   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.738205   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.738205   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Audit-Id: aa059ae8-70a5-4242-ae9d-77f31c39dd50
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.739655   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1762","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0624 05:49:46.741503   14012 kubeadm.go:733] kubelet initialised
	I0624 05:49:46.741558   14012 kubeadm.go:734] duration metric: took 8.217ms waiting for restarted kubelet to initialise ...
	I0624 05:49:46.741558   14012 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:49:46.741706   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:49:46.741706   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.741706   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.741825   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.745261   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:49:46.745261   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.745261   14012 round_trippers.go:580]     Audit-Id: 5ce3735f-33f4-404f-a2c8-99c3505dc970
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.745573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.745573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.747579   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87195 chars]
	I0624 05:49:46.751459   14012 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.752144   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:49:46.752183   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.752183   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.752221   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.754571   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:49:46.755653   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.755653   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.755653   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.755707   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.755707   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.755707   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.755707   14012 round_trippers.go:580]     Audit-Id: 1c3048bd-9a07-43c9-9dcc-6d02058758ef
	I0624 05:49:46.755954   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:49:46.756334   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.756334   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.756334   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.756334   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.757081   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.757081   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.757081   14012 round_trippers.go:580]     Audit-Id: c2192bba-0d48-47ce-9ce1-964ab92394dd
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.759421   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.759421   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.759742   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.760275   14012 pod_ready.go:97] node "multinode-876600" hosting pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.760356   14012 pod_ready.go:81] duration metric: took 8.3677ms for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.760356   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
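Each pod_ready wait above short-circuits the same way: the pod is fetched, then the node hosting it, and because multinode-876600 still reports Ready=False the wait is skipped rather than run for the full 4m0s. A hedged sketch of that check, where the pod name and kubeconfig path are assumptions taken from the log for illustration:

// Sketch of the pod_ready short-circuit: fetch the pod, fetch the node it runs
// on, and skip the wait when the node itself is not Ready.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Pod name taken from the log purely as an example.
	pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-sq7g6", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %q not Ready, skipping wait for pod %q\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("node %q Ready, would wait for pod %q\n", node.Name, pod.Name)
}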
	I0624 05:49:46.760356   14012 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.760442   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:49:46.760527   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.760527   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.760527   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.770864   14012 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 05:49:46.770864   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.771619   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Audit-Id: 4df6a388-9901-44f9-969f-906354682c9d
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.771619   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.771791   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1762","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0624 05:49:46.772819   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.772930   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.772930   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.772969   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.773619   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.773619   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.775666   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.775666   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Audit-Id: 7da004c8-f997-47f9-a5d9-4d9cb3d09782
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.776131   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.776131   14012 pod_ready.go:97] node "multinode-876600" hosting pod "etcd-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.776131   14012 pod_ready.go:81] duration metric: took 15.7755ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.776131   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "etcd-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.776131   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.776658   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:49:46.776658   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.776813   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.776813   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.778416   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.778416   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.778416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.778416   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.778416   14012 round_trippers.go:580]     Audit-Id: 9c2bf27f-bded-46f4-8d2c-8b9064f1a39c
	I0624 05:49:46.779663   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.779663   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.779663   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.779924   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a1504b-2338-458c-b448-92e8836b479a","resourceVersion":"1763","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.217.139:8443","kubernetes.io/config.hash":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.mirror":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.seen":"2024-06-24T12:49:37.772966703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0624 05:49:46.780288   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.780288   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.780288   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.780288   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.781549   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.781549   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.781549   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Audit-Id: 2d1c38c8-4153-439c-a42f-f22e94257d2a
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.783415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.783487   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.784030   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-apiserver-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.784030   14012 pod_ready.go:81] duration metric: took 7.3721ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.784092   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-apiserver-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.784092   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.784202   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:49:46.784287   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.784287   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.784287   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.785087   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.785087   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.785087   14012 round_trippers.go:580]     Audit-Id: 5293c4ef-1b69-4418-8847-b8a462584079
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.786745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.786745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.786790   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"1757","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0624 05:49:46.787601   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.787601   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.787601   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.787601   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.788193   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.790419   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Audit-Id: c2fdb3fb-789c-4e1c-b2cc-4181091d3726
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.790488   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.790488   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.790488   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.790567   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.791096   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-controller-manager-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.791159   14012 pod_ready.go:81] duration metric: took 7.0671ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.791159   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-controller-manager-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.791159   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.935534   14012 request.go:629] Waited for 144.2269ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:49:46.935715   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:49:46.935799   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.935839   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.935839   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.936156   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.940161   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.940161   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.940161   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Audit-Id: dfded3e9-0ea8-49ac-8fa7-da4ed8dc7b19
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.940379   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hjjs8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e148504-3300-4591-9576-7c5597851f41","resourceVersion":"609","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0624 05:49:47.142674   14012 request.go:629] Waited for 201.1098ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:49:47.142957   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:49:47.142957   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.142957   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.142957   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.143685   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.143685   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.143685   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.143685   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.147896   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.147896   14012 round_trippers.go:580]     Audit-Id: 377747e2-e78e-4cdf-b4ab-381671704590
	I0624 05:49:47.147949   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.147949   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.147949   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"1674","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3827 chars]
	I0624 05:49:47.148493   14012 pod_ready.go:92] pod "kube-proxy-hjjs8" in "kube-system" namespace has status "Ready":"True"
	I0624 05:49:47.148696   14012 pod_ready.go:81] duration metric: took 357.4616ms for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.148728   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.348129   14012 request.go:629] Waited for 199.1996ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:49:47.348129   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:49:47.348250   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.348250   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.348250   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.354839   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:49:47.354839   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.354839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.354839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Audit-Id: 3c14a3b0-1610-4bc5-8cf8-f908c7669877
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.354839   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"1835","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0624 05:49:47.542334   14012 request.go:629] Waited for 186.5757ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:47.542387   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:47.542629   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.542629   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.542721   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.543502   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.543502   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.543502   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.543502   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Audit-Id: c0b9f831-96e5-4efc-beda-c9e73d9e5f13
	I0624 05:49:47.546154   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:47.546865   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-proxy-lcc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:47.546978   14012 pod_ready.go:81] duration metric: took 398.2486ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:47.546978   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-proxy-lcc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:47.546978   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.735773   14012 request.go:629] Waited for 188.5213ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:49:47.735963   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:49:47.735963   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.736090   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.736090   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.743317   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:49:47.743745   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Audit-Id: 3ceeecb1-acd1-40b6-8cf9-efad270c8bae
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.743816   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.743816   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.743816   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.744000   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf7jm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4f99ace-bf94-40d8-b28f-27ec938418ef","resourceVersion":"1727","creationTimestamp":"2024-06-24T12:34:19Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:49:47.935486   14012 request.go:629] Waited for 190.3271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:49:47.935486   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:49:47.935715   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.935758   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.935758   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.936154   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.939520   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.939520   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.939520   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.939520   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.939520   14012 round_trippers.go:580]     Audit-Id: 693e992d-c4b1-4cf0-8816-7355a1b8a0ec
	I0624 05:49:47.939575   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.939575   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.939619   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m03","uid":"1392cc6a-2e48-4bde-9120-b3d99174bf99","resourceVersion":"1740","creationTimestamp":"2024-06-24T12:45:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_45_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0624 05:49:47.940202   14012 pod_ready.go:97] node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:49:47.940202   14012 pod_ready.go:81] duration metric: took 393.1448ms for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:47.940202   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:49:47.940202   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:48.142457   14012 request.go:629] Waited for 202.2549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:49:48.142790   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:49:48.142790   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.142790   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.142790   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.143163   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:48.143163   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.143163   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Audit-Id: dd96a694-b962-400c-ad11-53306a39e259
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.147649   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.147649   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.147979   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"1760","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0624 05:49:48.335881   14012 request.go:629] Waited for 186.9997ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.335881   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.335881   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.335881   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.335881   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.336537   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:48.340727   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Audit-Id: 82cc05b1-d644-483a-8cdd-575c6e2cbf34
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.340727   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.340808   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.340808   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.341020   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:48.341184   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-scheduler-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:48.341184   14012 pod_ready.go:81] duration metric: took 400.9813ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:48.341184   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-scheduler-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:48.341184   14012 pod_ready.go:38] duration metric: took 1.5995451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:49:48.341184   14012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 05:49:48.362051   14012 command_runner.go:130] > -16
	I0624 05:49:48.362145   14012 ops.go:34] apiserver oom_adj: -16
	I0624 05:49:48.362145   14012 kubeadm.go:591] duration metric: took 13.4849654s to restartPrimaryControlPlane
	I0624 05:49:48.362145   14012 kubeadm.go:393] duration metric: took 13.565256s to StartCluster
	I0624 05:49:48.362275   14012 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:48.362468   14012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:48.364136   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:48.366220   14012 start.go:234] Will wait 6m0s for node &{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 05:49:48.366220   14012 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 05:49:48.366766   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:49:48.369338   14012 out.go:177] * Verifying Kubernetes components...
	I0624 05:49:48.372128   14012 out.go:177] * Enabled addons: 
	I0624 05:49:48.378854   14012 addons.go:510] duration metric: took 12.6342ms for enable addons: enabled=[]
	I0624 05:49:48.383089   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:48.640440   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:49:48.660134   14012 node_ready.go:35] waiting up to 6m0s for node "multinode-876600" to be "Ready" ...
	I0624 05:49:48.668691   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.668691   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.668871   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.668871   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.676881   14012 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 05:49:48.676950   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Audit-Id: d7c5b015-4730-4db9-a03d-a722d4567614
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.677005   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.677005   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.677005   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.678047   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:49.165924   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:49.165924   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:49.165924   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:49.165924   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:49.166456   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:49.166456   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:49.170656   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:49 GMT
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Audit-Id: 62a7f463-c9ac-449d-bed2-21d28f00eb10
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:49.170656   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:49.170984   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:49.669857   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:49.669972   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:49.669972   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:49.669972   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:49.670337   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:49.670337   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Audit-Id: 62a6f727-469d-4dea-89bb-8d1a77b11a57
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:49.670337   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:49.670337   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:49 GMT
	I0624 05:49:49.675142   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.173144   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:50.173144   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:50.173144   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:50.173144   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:50.173941   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:50.177995   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:50.177995   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:50.177995   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:50 GMT
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Audit-Id: 484a1611-7fd8-4219-8445-4a9fab11bbd9
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:50.178741   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.663728   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:50.663970   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:50.663970   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:50.663970   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:50.664386   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:50.664386   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Audit-Id: 260f069b-5ebd-4ad9-987a-316613cc0a64
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:50.664386   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:50.664386   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:50 GMT
	I0624 05:49:50.668776   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.669114   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:51.168134   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:51.168374   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:51.168374   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:51.168374   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:51.169194   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:51.169194   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:51.174228   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:51 GMT
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Audit-Id: 261fd437-590d-43df-99d7-4da139b5e3f2
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:51.174228   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:51.174487   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:51.667217   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:51.667314   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:51.667314   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:51.667314   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:51.667670   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:51.667670   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:51.667670   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:51 GMT
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Audit-Id: 7d3c2e1a-149b-453d-9291-9b7b9bc2dcd5
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:51.670643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:51.670643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:51.671088   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.176750   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:52.176815   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:52.176815   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:52.176815   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:52.182049   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:49:52.182049   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:52.182049   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:52.182620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:52.182620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:52 GMT
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Audit-Id: 7bcf4e3c-b31c-42a5-9675-aef8c3a0a298
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:52.182762   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.673135   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:52.673314   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:52.673314   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:52.673314   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:52.673671   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:52.673671   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:52.673671   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:52 GMT
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Audit-Id: 299cc14e-23f4-49b8-a134-ded2343cf342
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:52.673671   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:52.678451   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.679756   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:53.175756   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:53.175928   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:53.175928   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:53.175928   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:53.176750   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:53.176750   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:53.176750   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:53.176750   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:53.180134   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:53.180134   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:53.180134   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:53 GMT
	I0624 05:49:53.180134   14012 round_trippers.go:580]     Audit-Id: 1f380ba4-5bb0-4891-adb8-119db425f568
	I0624 05:49:53.180386   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:53.669943   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:53.670048   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:53.670048   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:53.670048   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:53.670464   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:53.670464   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:53.670464   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:53.670464   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:53 GMT
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Audit-Id: 6c7815bd-1e19-4e1b-9378-30fb9662db04
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:53.673585   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:54.166632   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:54.166849   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:54.166849   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:54.166849   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:54.167093   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:54.170706   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Audit-Id: 5f8841c8-88ad-401f-a5b8-ace419a6ff17
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:54.170706   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:54.170706   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:54 GMT
	I0624 05:49:54.170946   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:54.662878   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:54.663166   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:54.663166   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:54.663166   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:54.663701   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:54.672979   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:54.672979   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:54.672979   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:54 GMT
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Audit-Id: 31ba798a-f2e4-43c7-bbcc-d0e0f697f30d
	I0624 05:49:54.673397   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:55.164099   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:55.164448   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:55.164448   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:55.164448   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:55.164831   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:55.164831   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:55.164831   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:55 GMT
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Audit-Id: b4cd0ced-8f7d-4f98-9778-234f1e8f06c5
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:55.164831   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:55.170148   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:55.170616   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:55.674312   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:55.674312   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:55.674863   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:55.674863   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:55.692363   14012 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0624 05:49:55.692607   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:55.692607   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:55.692607   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:55 GMT
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Audit-Id: 6cb6be55-f5af-48db-b21c-4b1990795fc4
	I0624 05:49:55.695598   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:56.160848   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:56.161145   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:56.161145   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:56.161145   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:56.161418   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:56.161418   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:56.166165   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:56.166165   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:56 GMT
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Audit-Id: 920a34a9-2fb6-4283-8d99-8eefd5d38269
	I0624 05:49:56.166433   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:56.668119   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:56.668119   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:56.668119   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:56.668119   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:56.675763   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:49:56.675763   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Audit-Id: d006fba3-00d7-4cbd-885c-f6c5f48ec508
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:56.675861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:56.675861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:56.675900   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:56 GMT
	I0624 05:49:56.675900   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.162946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:57.162946   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:57.162946   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:57.162946   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:57.163517   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:57.163517   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Audit-Id: de529e7a-3d3a-4af0-be7b-753914b8677e
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:57.163517   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:57.163517   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:57 GMT
	I0624 05:49:57.167844   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.666210   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:57.666210   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:57.666296   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:57.666296   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:57.666767   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:57.669889   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:57.669889   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:57 GMT
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Audit-Id: 1615bc6a-876f-4aa6-ad89-4f20b16c94b4
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:57.669889   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:57.670243   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.670862   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:58.162146   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:58.162146   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:58.162146   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:58.162452   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:58.162694   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:58.162694   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:58.167008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:58.167008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:58 GMT
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Audit-Id: 90a978f6-34a0-4eb3-9ad4-3a73e42e4135
	I0624 05:49:58.167267   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:58.665100   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:58.665185   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:58.665185   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:58.665185   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:58.665913   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:58.665913   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:58 GMT
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Audit-Id: 118e35cb-9e0e-4441-9bc5-5c0f366bb75b
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:58.668672   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:58.668672   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:58.668672   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:58.668893   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.180071   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:59.180071   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:59.180071   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:59.180071   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:59.180596   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:59.184430   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:59.184430   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:59.184548   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:59 GMT
	I0624 05:49:59.184548   14012 round_trippers.go:580]     Audit-Id: b6b87fed-c625-49f8-8148-c7f7ec7476f0
	I0624 05:49:59.184639   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:59.184639   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:59.184639   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:59.184639   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.671231   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:59.671231   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:59.671303   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:59.671303   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:59.674918   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:49:59.674918   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:59.674918   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:59.674918   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:59 GMT
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Audit-Id: 0b4da115-a3a6-4ed2-849f-7c56ca5bf742
	I0624 05:49:59.675782   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.676281   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:00.171540   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:00.171540   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:00.171657   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:00.171657   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:00.177224   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:00.177224   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:00.177377   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:00.177377   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:00 GMT
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Audit-Id: 56fc1926-a0ef-4508-912f-1cb667d5e3c2
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:00.177685   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:00.670950   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:00.670950   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:00.671094   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:00.671094   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:00.675746   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:00.675746   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:00.675746   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:00.675746   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:00.675746   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:00.675941   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:00.675941   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:00 GMT
	I0624 05:50:00.675941   14012 round_trippers.go:580]     Audit-Id: 908c81d2-d416-4734-b52a-ed5183c0f41c
	I0624 05:50:00.676331   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:01.168397   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:01.168587   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:01.168587   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:01.168587   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:01.172185   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:01.173199   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:01.173225   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:01.173225   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:01 GMT
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Audit-Id: a955f5df-b0e7-4999-9a3a-a48ea9af8c65
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:01.173884   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:01.669561   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:01.669832   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:01.669832   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:01.669832   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:01.673416   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:01.674088   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:01.674088   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:01 GMT
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Audit-Id: d7376207-fbf1-4cb3-b487-cb31eabbb66d
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:01.674088   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:01.674916   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:02.171837   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:02.171990   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:02.171990   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:02.171990   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:02.177779   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:02.177779   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:02.177779   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:02.177779   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:02 GMT
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Audit-Id: cc3f2cfb-8a55-4e64-a643-73982e9bc09b
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:02.178161   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:02.178731   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:02.671829   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:02.671959   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:02.671959   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:02.671959   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:02.676453   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:02.676453   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:02.676453   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:02.676453   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:02 GMT
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Audit-Id: 56700678-8efa-44f3-8d6a-7311f1aa20c0
	I0624 05:50:02.677240   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:03.169974   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:03.170057   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:03.170057   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:03.170057   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:03.173940   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:03.174416   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:03.174416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:03.174416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:03 GMT
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Audit-Id: 0986f639-a4b4-48a3-bea9-ba2abec3acdc
	I0624 05:50:03.174416   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:03.670342   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:03.670409   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:03.670409   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:03.670478   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:03.677319   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:03.677319   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:03.677319   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:03.677319   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:03 GMT
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Audit-Id: 27f43b91-b04f-4b99-ac99-6fe888b12ba5
	I0624 05:50:03.678836   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.169015   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:04.169015   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:04.169015   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:04.169015   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:04.172627   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:04.172627   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:04.172627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:04 GMT
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Audit-Id: aa8121e1-631a-4a44-b15c-6ad9047b0bcb
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:04.172627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:04.173791   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.671639   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:04.671639   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:04.671639   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:04.671639   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:04.675238   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:04.675238   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:04.675238   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:04.675238   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:04 GMT
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Audit-Id: 8fab93ea-cce9-470e-8a0e-085eeb9b272e
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:04.675238   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.676231   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
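
The repeated GET/status entries above are the node-readiness wait loop: roughly every 500 ms the client fetches the Node object and checks its Ready condition, logging `"Ready":"False"` until the kubelet reports otherwise. As a rough, hedged illustration only (not minikube's actual implementation), a client-go loop like the sketch below reproduces that pattern; the kubeconfig path, node name, and timeout are assumptions taken from this run.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports the
// Ready condition as True, mirroring the ~500 ms GET loop seen in the log.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet" and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// "kubeconfig" is a hypothetical path used only for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "multinode-876600", 6*time.Minute); err != nil {
		fmt.Println("node did not become Ready:", err)
	}
}
```
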
	I0624 05:50:05.169701   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:05.169701   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:05.169701   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:05.169701   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:05.174374   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:05.174374   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:05 GMT
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Audit-Id: a4324e0d-a73f-4c5f-85d7-9e3da2a74739
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:05.174374   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:05.174374   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:05.174882   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:05.669404   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:05.669404   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:05.669404   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:05.669404   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:05.674295   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:05.674295   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:05 GMT
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Audit-Id: 110874d4-fb32-4ed1-8b82-26979e7f8f2c
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:05.674379   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:05.674379   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:05.674634   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:06.167589   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:06.167892   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:06.167892   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:06.167892   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:06.172489   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:06.172489   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:06.172489   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:06.172489   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:06 GMT
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Audit-Id: 0395f95b-8fb8-4463-8699-afc27b3cd268
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:06.173114   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:06.667281   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:06.667498   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:06.667498   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:06.667498   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:06.671591   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:06.671591   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:06.671591   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:06.671591   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:06.671591   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:06.671778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:06.671778   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:06 GMT
	I0624 05:50:06.671778   14012 round_trippers.go:580]     Audit-Id: 0749ebc1-22b1-47e4-bdd9-2221f6be7be0
	I0624 05:50:06.673043   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:07.166356   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:07.166356   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:07.166356   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:07.166356   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:07.169949   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:07.170949   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:07 GMT
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Audit-Id: aaf62023-da6b-468d-a452-aa8305778f5b
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:07.171020   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:07.171020   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:07.171519   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:07.172150   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:07.667291   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:07.667528   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:07.667528   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:07.667528   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:07.674865   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:07.674865   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:07.674865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:07 GMT
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Audit-Id: 0e7079ed-d2f9-40a7-ae35-5c0a6826773d
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:07.674865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:07.674865   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:08.168844   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:08.168844   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:08.168844   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:08.168844   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:08.173800   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:08.174349   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:08.174349   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:08.174349   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:08 GMT
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Audit-Id: 89045b9b-a8bf-48e4-b7b5-89ae293d61c8
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:08.174424   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:08.175196   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:08.667807   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:08.668039   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:08.668039   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:08.668039   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:08.671291   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:08.671946   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Audit-Id: b1946d1b-d1e8-4bab-8f71-9e5b66952410
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:08.671946   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:08.671946   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:08 GMT
	I0624 05:50:08.672208   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:09.170511   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:09.170598   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:09.170598   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:09.170710   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:09.174259   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:09.174974   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Audit-Id: 7be20a32-be06-4121-8209-bbee987eea43
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:09.175055   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:09.175055   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:09 GMT
	I0624 05:50:09.175236   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:09.175872   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:09.672238   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:09.672238   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:09.672238   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:09.672238   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:09.676705   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:09.677145   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:09.677145   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:09 GMT
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Audit-Id: 49a0505d-a0e9-479c-b880-aca3b0d87646
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:09.677224   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:09.677224   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:10.170926   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:10.170926   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:10.170926   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:10.170926   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:10.174551   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:10.175511   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:10.175511   14012 round_trippers.go:580]     Audit-Id: a25e9bdd-31fb-4fa0-95bb-5d23174459e5
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:10.175548   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:10.175548   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:10 GMT
	I0624 05:50:10.175751   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:10.673649   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:10.673832   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:10.673832   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:10.673832   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:10.677738   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:10.678073   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:10 GMT
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Audit-Id: a7ada07d-c756-4e9c-867e-28f5092c6321
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:10.678073   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:10.678073   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:10.678872   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.161573   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:11.161960   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:11.161960   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:11.161960   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:11.166074   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:11.166074   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:11.166074   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:11.166074   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:11 GMT
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Audit-Id: 90d0b419-1af7-48a7-85f5-c46e4e424fb3
	I0624 05:50:11.166074   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.661847   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:11.661938   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:11.661938   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:11.661938   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:11.666803   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:11.666803   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Audit-Id: 00df1833-64d0-4cce-a5c0-7fb38af0e0ee
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:11.666803   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:11.666803   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:11 GMT
	I0624 05:50:11.667622   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.668152   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:12.175756   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:12.175756   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:12.175982   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:12.175982   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:12.179544   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:12.180293   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:12.180293   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:12.180293   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:12 GMT
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Audit-Id: bb479baa-2ab8-4ecb-9e0a-ae833c7e6680
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:12.180492   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:12.674727   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:12.674727   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:12.674727   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:12.674727   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:12.677356   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:12.677356   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Audit-Id: 3240f605-ab36-462a-b935-b5371031a773
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:12.677356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:12.677356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:12 GMT
	I0624 05:50:12.679399   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:13.173187   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:13.173187   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:13.173187   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:13.173187   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:13.176778   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:13.176778   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:13.176778   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:13.176778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:13.176778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:13 GMT
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Audit-Id: f79f4075-8b79-4e22-abda-78a3c14bdd11
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:13.177947   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:13.660489   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:13.660892   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:13.660972   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:13.660972   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:13.665657   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:13.665989   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Audit-Id: f1ae38b2-4293-4b50-ae9c-89dceb3a9d87
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:13.665989   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:13.665989   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:13 GMT
	I0624 05:50:13.666914   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:14.171787   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:14.171787   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:14.171787   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:14.171787   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:14.174639   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:14.175693   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:14.175693   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:14 GMT
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Audit-Id: fac1a829-354d-4afc-b58f-aa14ebe356f8
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:14.175770   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:14.175770   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:14.176405   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:14.177160   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:14.671000   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:14.671000   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:14.671070   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:14.671070   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:14.674715   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:14.674715   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:14 GMT
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Audit-Id: 064ee59b-16a0-40aa-b5df-450d9f1c371e
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:14.674715   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:14.674715   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:14.676532   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:15.171724   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:15.171724   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:15.171927   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:15.171927   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:15.175770   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:15.176620   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:15.176620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:15 GMT
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Audit-Id: 5125f241-aab9-4c8f-8c2e-4aebbcc5fac7
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:15.176620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:15.176620   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:15.674193   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:15.674480   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:15.674480   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:15.674557   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:15.678175   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:15.678175   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:15.678175   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:15.678175   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:15 GMT
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Audit-Id: 866b1469-019b-4e69-acd8-f7d4a988a00e
	I0624 05:50:15.678899   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.164296   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:16.164296   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:16.164296   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:16.164296   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:16.169120   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:16.169519   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:16 GMT
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Audit-Id: a421ef8b-ecc6-45c6-953d-c6a354c29a3c
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:16.169519   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:16.169519   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:16.169727   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.665441   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:16.665441   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:16.665787   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:16.665787   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:16.670109   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:16.670109   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:16.670109   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:16.670109   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:16.670109   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:16 GMT
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Audit-Id: 95ff4e68-19bd-4974-8128-20ca75144d12
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:16.670999   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.671867   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:17.172606   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:17.172606   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:17.172606   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:17.172606   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:17.176405   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:17.176405   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Audit-Id: 3bef91af-a293-40e0-a995-1629c31f3b18
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:17.177283   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:17.177283   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:17.177283   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:17 GMT
	I0624 05:50:17.177559   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:17.670989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:17.670989   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:17.670989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:17.671123   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:17.674452   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:17.674452   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:17.674452   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:17.674452   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:17.674452   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:17 GMT
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Audit-Id: 1cbb287f-7522-47e1-9c19-1025690b7dda
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:17.675365   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.174338   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:18.174338   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:18.174436   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:18.174436   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:18.178161   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:18.178535   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:18.178535   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:18.178535   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:18 GMT
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Audit-Id: d40aa7ee-823b-47d5-bc58-6a266d2014de
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:18.178535   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.675785   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:18.675785   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:18.675785   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:18.675785   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:18.683180   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:18.683180   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:18.683180   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:18.683180   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:18 GMT
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Audit-Id: c1c5f106-8963-4032-b7bd-d4c36899d37e
	I0624 05:50:18.683953   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.683995   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:19.172788   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:19.172788   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:19.172788   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:19.172788   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:19.176367   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:19.176367   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:19 GMT
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Audit-Id: 5dbbd0c1-58f2-4690-90bc-0a5b31a5e3b1
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:19.177367   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:19.177367   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:19.177544   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:19.670865   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:19.670865   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:19.670865   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:19.670865   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:19.675521   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:19.675521   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Audit-Id: 25103891-a44d-48cf-94cf-48dd326b8222
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:19.675521   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:19.675521   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:19 GMT
	I0624 05:50:19.675943   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:20.169540   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:20.169540   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:20.169624   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:20.169624   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:20.172881   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:20.173531   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Audit-Id: 9bb9ed5b-64b0-4365-b6e2-f6478b564542
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:20.173531   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:20.173531   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:20 GMT
	I0624 05:50:20.173828   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:20.668816   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:20.668904   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:20.668904   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:20.668904   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:20.672251   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:20.672456   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Audit-Id: 2b7cd664-a34f-4102-8de9-e641ccb068db
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:20.672456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:20.672456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:20 GMT
	I0624 05:50:20.672456   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:21.167670   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:21.167670   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:21.167670   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:21.167670   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:21.171960   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:21.172791   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:21.172791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:21.172791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:21 GMT
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Audit-Id: d9a0a253-8b6c-4b5d-837f-6f38b83b245a
	I0624 05:50:21.172976   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:21.173640   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:21.667815   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:21.667815   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:21.667815   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:21.667815   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:21.671414   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:21.672118   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Audit-Id: c48c8792-da4d-4c89-ae05-48a7658137fb
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:21.672118   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:21.672118   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:21 GMT
	I0624 05:50:21.672281   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:22.167651   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:22.167651   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:22.167651   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:22.167651   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:22.171237   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:22.171237   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:22.172123   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:22.172123   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:22 GMT
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Audit-Id: e9b3710d-cb0d-4072-ad75-0ac38bee1528
	I0624 05:50:22.172327   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:22.666089   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:22.666234   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:22.666234   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:22.666234   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:22.672882   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:22.672882   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Audit-Id: 33d59337-f7d2-408f-8ece-49433fc51ab1
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:22.672882   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:22.672882   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:22 GMT
	I0624 05:50:22.673626   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.168025   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:23.168025   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:23.168025   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:23.168025   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:23.171603   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:23.171603   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:23.171603   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:23.171603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:23.171603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:23.171603   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:23 GMT
	I0624 05:50:23.172503   14012 round_trippers.go:580]     Audit-Id: 378aecaa-32a2-4e64-8c30-9c6076e3d44e
	I0624 05:50:23.172503   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:23.172577   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.667920   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:23.667920   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:23.667920   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:23.667920   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:23.672356   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:23.672356   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:23.672356   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:23.672356   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:23.672356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:23.673197   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:23.673197   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:23 GMT
	I0624 05:50:23.673197   14012 round_trippers.go:580]     Audit-Id: 402f4f65-55c7-4740-936c-51b34d2ff8db
	I0624 05:50:23.673436   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.674038   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:24.165837   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.166185   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.166185   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.166331   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.170047   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:24.170099   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.170099   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.170186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Audit-Id: a6d1ea4a-a4c3-4cd4-a4c0-c215014723e5
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.170186   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:24.665215   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.665215   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.665215   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.665215   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.680661   14012 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 05:50:24.681213   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Audit-Id: da8499e4-8a12-4d0e-8209-67eb1c36e8c3
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.681284   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.681284   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.681284   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.681513   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:24.682078   14012 node_ready.go:49] node "multinode-876600" has status "Ready":"True"
	I0624 05:50:24.682305   14012 node_ready.go:38] duration metric: took 36.0218111s for node "multinode-876600" to be "Ready" ...
	I0624 05:50:24.682305   14012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:50:24.682432   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:50:24.682432   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.682508   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.682508   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.688138   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:24.688861   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.688861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Audit-Id: 2eb217b4-4739-4110-ad39-c0f3608cf259
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.688861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.690532   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1917"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86634 chars]
	I0624 05:50:24.694829   14012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:24.695077   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:24.695077   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.695077   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.695077   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.697739   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:24.697739   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.697739   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.697739   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.698458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.698458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.698458   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.698458   14012 round_trippers.go:580]     Audit-Id: b5d49757-c933-4f1a-af65-38cbae38e997
	I0624 05:50:24.698615   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:24.699730   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.699730   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.699730   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.699814   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.712186   14012 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0624 05:50:24.712186   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Audit-Id: c2446dc4-9b69-4d92-b048-79c73eee8c71
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.712186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.712186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.712747   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:25.201240   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:25.201306   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.201306   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.201306   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.204762   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:25.205784   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.205784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Audit-Id: ea0edc87-b213-4085-9e92-25ae2e8cb757
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.205784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.206037   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:25.206748   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:25.206748   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.206748   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.206748   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.209613   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:25.209613   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.209613   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.210303   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.210303   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Audit-Id: 8e8e1984-5907-4d74-8c5e-9f9f0707450b
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.210501   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:25.699513   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:25.699769   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.699769   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.699769   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.706595   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:25.706595   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Audit-Id: f08b3e84-dd33-4e36-938e-b080e07aea16
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.706595   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.706595   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.706595   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:25.707286   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:25.707286   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.707833   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.707833   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.710473   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:25.710473   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.710848   14012 round_trippers.go:580]     Audit-Id: 5d1a93a6-5a9d-4280-9db5-701c4781644b
	I0624 05:50:25.710848   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.710938   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.710938   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.710938   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.710938   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.711348   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:26.200198   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:26.200391   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.200391   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.200391   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.204737   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:26.204737   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Audit-Id: 06a287d8-3587-4edb-855b-8d9c93bd7f26
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.205443   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.205443   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.205522   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:26.206305   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:26.206305   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.206437   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.206437   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.209199   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:26.209199   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.209199   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.209199   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.209199   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.209199   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.209767   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.209767   14012 round_trippers.go:580]     Audit-Id: 664c76fc-77ca-4bc3-913f-4f995ab0cb86
	I0624 05:50:26.210063   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:26.696827   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:26.697017   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.697017   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.697017   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.700596   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:26.701652   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.701652   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.701652   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Audit-Id: ff0c947a-4028-4f3d-822d-9f01c1afd2c2
	I0624 05:50:26.702636   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:26.703683   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:26.703683   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.703683   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.703683   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.706269   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:26.706269   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Audit-Id: 6f23babe-d9c7-4d42-82e0-f85251d35b13
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.706716   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.706716   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.706785   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.706785   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:26.707553   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:27.197915   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:27.198031   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.198031   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.198031   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.201446   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:27.201446   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.201446   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.201446   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.202252   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Audit-Id: 0fa62ad7-bf23-481d-9dd5-08191ac0ec4f
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.203063   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:27.203659   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:27.203837   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.203837   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.203837   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.206717   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:27.206717   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.206717   14012 round_trippers.go:580]     Audit-Id: a20fc323-db9b-45c5-8970-9218bee9e9b5
	I0624 05:50:27.206717   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.207035   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.207035   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.207035   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.207035   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.207233   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:27.700989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:27.700989   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.700989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.700989   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.705642   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:27.705642   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Audit-Id: afdc4acb-72dc-4458-8736-65ae25f45eec
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.705879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.705879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.706086   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:27.706840   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:27.706840   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.706840   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.706840   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.709463   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:27.709865   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Audit-Id: d4cf9f94-a68c-42de-8f38-cdfc1bf7dc0b
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.709865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.709865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.709865   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.205427   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:28.205427   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.205427   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.205427   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.209015   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.209867   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.209867   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Audit-Id: 4c439a1c-21a7-4130-8e08-01842e816e0b
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.209867   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.210143   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:28.210898   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:28.210898   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.210898   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.210898   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.213498   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:28.213498   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.213498   14012 round_trippers.go:580]     Audit-Id: a21d0dff-b65c-4568-84a0-ae70f339f4de
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.214128   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.214128   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.214193   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.706818   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:28.707028   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.707028   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.707028   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.710540   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.711539   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.711539   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.711539   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Audit-Id: ad42d109-8160-4359-8801-8b87ec0f3246
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.711786   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:28.713220   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:28.713220   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.713220   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.713220   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.716354   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.716354   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Audit-Id: 0981b4a8-eafc-4317-a718-35891a327842
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.716354   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.716354   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.716854   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.717288   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:29.205824   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:29.205824   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.205824   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.205824   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.209409   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.209409   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.209409   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.209409   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Audit-Id: 4f2b54b6-6fb2-485f-a18f-c0d4851a8442
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.210307   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.210514   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:29.211425   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:29.211479   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.211479   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.211479   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.215378   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.215378   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.215378   14012 round_trippers.go:580]     Audit-Id: 0c3f0872-2a39-4f40-a16e-c279ba17dacd
	I0624 05:50:29.215378   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.215734   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.215734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.215734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.215734   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.215805   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:29.707929   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:29.707929   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.707929   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.707929   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.711499   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.712287   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.712287   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.712287   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Audit-Id: 34f24d0b-61bd-43fc-ab79-953ecae903ef
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.712489   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:29.713428   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:29.713499   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.713499   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.713499   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.716930   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.716930   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Audit-Id: a94b4251-0329-450f-ad40-2bca3ec91384
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.717321   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.717321   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.717814   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:30.200411   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:30.200495   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.200495   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.200495   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.204954   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:30.204954   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Audit-Id: d421138f-9626-4abc-ac15-92729819e340
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.204954   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.204954   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.205912   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:30.206744   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:30.206744   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.206744   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.206744   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.210402   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:30.210402   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.210603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.210603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Audit-Id: 145a0ab9-3781-4499-bb99-6dd25eacb5f8
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.211162   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:30.698512   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:30.698751   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.698751   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.698751   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.702362   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:30.702362   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.702362   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.702362   14012 round_trippers.go:580]     Audit-Id: 329b231c-1991-48a8-b309-d8337234b734
	I0624 05:50:30.702839   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.702839   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.702839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.702839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.703226   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:30.703980   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:30.703980   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.703980   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.703980   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.706573   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:30.706573   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.706573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Audit-Id: 96a23bf9-9d98-4b0e-a6a3-966db6111d70
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.706573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.707753   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:31.200516   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:31.200516   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.200516   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.200716   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.204154   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:31.204154   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Audit-Id: b387e48c-1685-4c7d-9905-16dc24a703d2
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.204154   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.204154   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.205158   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:31.205158   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:31.205158   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.205158   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.205158   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.210162   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:31.210162   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.211022   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.211022   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.211022   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.211090   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.211090   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.211090   14012 round_trippers.go:580]     Audit-Id: 585396ea-c0b0-4486-a07d-960cbe7d07ad
	I0624 05:50:31.211090   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:31.211678   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:31.703952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:31.704078   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.704078   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.704185   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.708746   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:31.708847   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Audit-Id: 4b636e50-b83e-4f31-9d83-c36035928e0c
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.708847   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.708847   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.708847   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:31.709952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:31.710119   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.710119   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.710119   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.716371   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:31.716371   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Audit-Id: 212ac423-040e-4d4d-9e69-2bbab8a42c91
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.716371   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.716371   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.716371   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:32.204274   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:32.204373   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.204373   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.204373   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.212050   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:32.212050   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Audit-Id: fb944d1e-24ff-4de9-8e16-aee724f9012d
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.212050   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.212050   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.212050   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:32.213003   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:32.213003   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.213003   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.213003   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.215627   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:32.215627   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.215627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.215627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Audit-Id: 32eb5cc6-dc35-47f0-864e-9c741293901e
	I0624 05:50:32.216631   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:32.703553   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:32.703553   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.703553   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.703553   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.708316   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:32.708602   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.709490   14012 round_trippers.go:580]     Audit-Id: fa89fba0-50c4-4d76-b5b9-594c0467a973
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.709536   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.709536   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.709750   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:32.710575   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:32.710575   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.710575   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.710575   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.715892   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:32.715892   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Audit-Id: 1a5da55d-e4d8-437a-b317-910f8947a8d3
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.716142   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.716142   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.716142   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.716235   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:33.205645   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:33.205645   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.205645   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.205645   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.210119   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:33.210504   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Audit-Id: 7528eb69-c4c8-4edc-bb1a-fd1490daa2e7
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.210504   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.210504   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.210570   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.210570   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:33.211456   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:33.211456   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.211456   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.211456   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.213966   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:33.213966   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Audit-Id: 279fbd76-29dc-44f0-82d0-445d58ce0faf
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.213966   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.214688   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.214688   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.214999   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:33.215531   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:33.703294   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:33.703365   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.703365   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.703365   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.707963   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:33.707963   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Audit-Id: 1d218b8a-7e2b-485b-a44e-b540ca3251b9
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.708419   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.708419   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.708419   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.708419   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:33.709179   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:33.709263   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.709263   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.709263   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.711593   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:33.712503   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Audit-Id: b7aeded1-d61e-4024-ae09-ca03a8185597
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.712560   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.712560   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.712956   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:34.204616   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:34.204616   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.204616   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.204616   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.209245   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:34.209804   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.209804   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.209804   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Audit-Id: ae1cb11a-f417-46f7-be76-a424d38228d1
	I0624 05:50:34.210072   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:34.210925   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:34.210925   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.210925   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.210925   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.220772   14012 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 05:50:34.220844   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.220916   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.220916   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Audit-Id: 1274f919-84d9-4a5a-9faa-d9d19c4b8db4
	I0624 05:50:34.220916   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:34.705307   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:34.705307   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.705307   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.705307   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.708886   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:34.708886   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.708886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Audit-Id: 19854679-820d-4405-93e0-b9d16ac62e84
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.709104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.709335   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:34.709956   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:34.709956   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.709956   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.709956   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.712977   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:34.712977   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Audit-Id: 28fa2005-a22f-4e02-941c-93f9ed318053
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.712977   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.712977   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.713627   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:35.206258   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:35.206258   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.206258   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.206258   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.210868   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:35.211620   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.211620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.211620   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.211702   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.211702   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.211702   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.211758   14012 round_trippers.go:580]     Audit-Id: e590013f-20a3-4b9b-9f9a-b2926e452d17
	I0624 05:50:35.211984   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:35.212741   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:35.212741   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.212741   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.212741   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.215667   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:35.215667   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Audit-Id: e44512ae-24b5-4187-afcd-5a45424ee18c
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.216582   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.216582   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.216582   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.217062   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:35.217700   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:35.706689   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:35.706775   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.706775   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.706775   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.712902   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:35.713497   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.713497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.713497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Audit-Id: 29f3114a-2168-4e82-b2de-b05e040628d5
	I0624 05:50:35.713553   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:35.714733   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:35.714763   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.714763   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.714763   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.717745   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:35.717745   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Audit-Id: 89fa1007-484f-41ec-b73d-5070001985d6
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.717745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.717745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.717745   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:36.206229   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:36.206229   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.206229   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.206229   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.209811   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:36.209811   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.209811   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.210814   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.210814   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.210840   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.210840   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.210840   14012 round_trippers.go:580]     Audit-Id: 710b5fd5-30af-417a-96f6-6d4fce0cc144
	I0624 05:50:36.211048   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:36.211874   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:36.211982   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.211982   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.212056   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.215511   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:36.215511   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Audit-Id: 4e3c82de-95a4-4378-a597-a4de8b7c0869
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.215511   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.215511   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.215511   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:36.707883   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:36.707934   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.707934   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.707934   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.712534   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:36.712777   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Audit-Id: ad8f8c9d-041b-447e-88e3-10a93e4ff54c
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.712777   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.712777   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.712900   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:36.713636   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:36.713801   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.713801   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.713801   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.717040   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:36.717235   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.717337   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.717405   14012 round_trippers.go:580]     Audit-Id: cbc4045c-2eee-4688-8de2-9c13ceb5c546
	I0624 05:50:36.717568   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.717656   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.717734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.717734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.718060   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.208216   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:37.208216   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.208216   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.208334   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.212617   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:37.212719   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.212719   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.212719   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Audit-Id: 09124a77-fa51-4249-b4be-b8853c515223
	I0624 05:50:37.212982   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:37.213828   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:37.213828   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.213926   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.213926   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.215962   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:37.215962   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Audit-Id: da14a930-243f-4097-a70f-84a0fd683211
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.215962   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.216456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.216456   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.216831   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.708738   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:37.709155   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.709155   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.709155   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.712965   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:37.712965   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Audit-Id: ee670f3b-eb92-4c78-b8b0-5a3567c773f9
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.713835   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.713835   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.714113   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:37.714797   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:37.714868   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.714868   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.714868   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.717183   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:37.717183   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.717757   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.717757   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Audit-Id: d150471b-3aee-4bca-81a6-4510945efa23
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.718450   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.719393   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:38.198152   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:38.198152   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.198287   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.198287   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.202550   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:38.202550   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Audit-Id: fbe767cb-dde8-4e58-bde4-1d433ffbc7e3
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.202550   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.202733   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.202733   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.204028   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:38.204853   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:38.204937   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.204937   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.204937   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.207899   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:38.208305   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.208305   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Audit-Id: 4235aa7d-71ca-4eea-a40c-75a82628484e
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.208305   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.208305   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:38.699443   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:38.699515   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.699515   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.699515   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.703956   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:38.703956   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.703956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Audit-Id: ce85e556-36f9-4c50-a361-927f8c860ef5
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.703956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.704514   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:38.705414   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:38.705526   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.705526   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.705526   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.708784   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:38.708784   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.708784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.708784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Audit-Id: 5ca3fc1c-9fc4-4f5b-aaed-8d33c9dcfb12
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.710233   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:39.200289   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:39.200289   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.200289   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.200289   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.203343   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:39.203343   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.203343   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.203343   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Audit-Id: 27f09fe9-1278-49a9-bd93-f2479893009e
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.204766   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:39.205690   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:39.205690   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.205690   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.205800   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.208864   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:39.209737   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.209876   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.209876   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Audit-Id: ffab95d1-a6a0-4c5a-970f-45c4796da043
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.210236   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.210468   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:39.700485   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:39.700485   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.700485   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.700485   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.704102   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:39.704102   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Audit-Id: 9866506b-b0de-48b4-8537-749774e85c66
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.704998   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.704998   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.704998   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.705221   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:39.705952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:39.706016   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.706016   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.706016   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.708469   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:39.709224   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.709224   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.709361   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.709487   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.709556   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.709609   14012 round_trippers.go:580]     Audit-Id: 0bd168a9-9a43-4686-87f5-65031b4d49d8
	I0624 05:50:39.709609   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.709609   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:40.202101   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:40.202101   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.202101   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.202101   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.205697   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.205697   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.205697   14012 round_trippers.go:580]     Audit-Id: e3af3f2b-a70a-4174-9597-a6750bf84e46
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.206525   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.206525   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.206725   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:40.207638   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:40.207638   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.207638   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.207638   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.209570   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:50:40.209570   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.209570   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Audit-Id: 584911ba-6f06-46c1-8580-58d67b06ced1
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.210482   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.210482   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.210734   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:40.210802   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:40.702952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:40.703219   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.703219   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.703219   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.707017   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.707017   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.707017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.707017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Audit-Id: 135a95cf-2709-4fcc-83fb-099ce4a1348c
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.707656   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:40.707860   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:40.707860   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.708442   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.708442   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.712055   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.712055   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.712496   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.712496   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Audit-Id: f09dca58-aad2-4c4a-8412-4c7dcf6d84ea
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.712763   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:41.204810   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:41.204810   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.204810   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.204810   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.208421   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:41.209322   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.209435   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Audit-Id: 80b0575b-3b2b-4cfb-9e5c-6d51bff348e7
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.209458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.209675   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:41.210592   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:41.210702   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.210702   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.210702   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.212980   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:41.212980   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Audit-Id: 46187cdc-a9a0-46b4-b980-affdc2ac6c93
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.213879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.213879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.214028   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:41.701136   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:41.701308   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.701308   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.701308   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.705705   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:41.705705   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.705705   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.705926   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Audit-Id: 27310dbc-f40b-461b-b82a-61f3a4db8778
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.706010   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:41.706854   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:41.706917   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.706917   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.706917   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.709606   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:41.710316   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.710316   14012 round_trippers.go:580]     Audit-Id: 7fb44fe7-126b-4811-ae10-63715e7b6705
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.710396   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.710396   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.710396   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:42.200824   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:42.200824   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.200824   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.200909   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.204104   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.204104   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Audit-Id: f95af931-0962-4019-8a34-b8dfe825ec27
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.204104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.204104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.205430   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:42.206384   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:42.206483   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.206483   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.206483   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.209836   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.209836   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.209836   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Audit-Id: 5efd11b1-6e20-43ea-9301-58346c266c6d
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.209836   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.210647   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:42.211106   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:42.701897   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:42.701978   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.701978   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.701978   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.705406   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.705406   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Audit-Id: ebc7bf6c-cdba-41a4-b8eb-c905c93c54f2
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.706160   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.706160   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.706436   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:42.706778   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:42.706778   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.706778   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.706778   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.715514   14012 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 05:50:42.716402   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Audit-Id: e204d810-5631-4abc-b839-680590d1f034
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.716402   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.716402   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.716985   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:43.201070   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:43.201146   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.201146   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.201146   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.205426   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:43.205426   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Audit-Id: dab85ba5-bd04-44f6-9788-a99ae6687789
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.205754   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.205754   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.206079   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:43.207024   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:43.207087   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.207087   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.207087   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.209410   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:43.209410   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Audit-Id: 3f85bbdc-ec45-45a5-a97d-18cbf30e73bf
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.209410   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.209410   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.210790   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:43.702606   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:43.702606   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.702606   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.702606   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.707213   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:43.707213   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.707302   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.707302   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Audit-Id: a8cae300-77f7-44ad-9db0-71a6de5c326c
	I0624 05:50:43.708225   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:43.708417   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:43.708417   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.708417   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.708417   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.712061   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:43.712061   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.712061   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Audit-Id: 6c55b8f8-0514-4750-8e48-2fc390a39b24
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.712204   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.712458   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:44.203845   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:44.203845   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.203845   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.203845   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.207425   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:44.207512   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.207512   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.207512   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Audit-Id: b8ef4fbf-2d35-4f10-8316-27065d9db5eb
	I0624 05:50:44.207695   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:44.208587   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:44.208587   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.208659   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.208659   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.211599   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.211791   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.211791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Audit-Id: 7961e8fa-5329-4b0c-9f6e-20630bb4aa77
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.211791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.212673   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:44.213164   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
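	[editor's note] The loop above is minikube's readiness wait repeatedly fetching the coredns Pod and its node and finding the Pod's "Ready" condition still "False". Purely as a hedged illustration of that kind of check (this is not minikube's own pod_ready.go, and the kubeconfig path, namespace, pod name, and 500ms interval are assumptions lifted from the log), a minimal client-go sketch might look like:

	// readiness_poll_sketch.go — illustrative only, not minikube source.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the Pod's Ready condition is True —
	// the same condition the log keeps reporting as "False".
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll roughly every 500ms, mirroring the cadence visible in the log.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-7db6d8ff4d-sq7g6", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet")
			time.Sleep(500 * time.Millisecond)
		}
	}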
	I0624 05:50:44.703088   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:44.703349   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.703349   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.703349   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.705789   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.705789   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.705789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.705789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Audit-Id: 29fb7f5b-8a90-43b5-a0ed-99defd64dcac
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.707214   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:44.707992   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:44.707992   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.707992   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.707992   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.710576   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.710576   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.710576   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.710576   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Audit-Id: 69e4d8ec-200c-45fb-8ac0-dabb9af5b0a4
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.711275   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.711658   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:45.199946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:45.200036   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.200036   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.200036   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.204965   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:45.205294   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Audit-Id: 3d6a5403-10c7-4ace-b7a2-b7779ee91153
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.205294   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.205294   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.206275   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:45.206988   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:45.206988   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.206988   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.206988   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.208595   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:50:45.209746   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.209746   14012 round_trippers.go:580]     Audit-Id: dcc81d8e-c448-45bd-9026-32ef5256d02a
	I0624 05:50:45.209746   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.209810   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.209810   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.209810   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.209810   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.210223   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:45.697743   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:45.697999   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.697999   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.697999   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.701347   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:45.701347   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.701347   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.701347   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Audit-Id: 64a5cc30-842b-4df1-bc50-af1c5a5658e9
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.703060   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:45.703987   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:45.704048   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.704105   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.704105   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.707104   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:45.707104   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Audit-Id: 17d05764-09c2-466e-84d1-8807d124a4d3
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.707104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.707104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.708862   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.199612   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:46.199612   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.199612   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.199612   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.203191   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:46.203191   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Audit-Id: 6d38495c-1595-42a2-9d0a-45a51ece0e96
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.203191   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.203956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.203956   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.204296   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:46.205193   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:46.205238   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.205238   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.205238   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.207807   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:46.207807   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.207807   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Audit-Id: db5a7b83-4c8b-4cd7-8c5a-25ff629ad507
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.207807   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.209301   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.698687   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:46.698758   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.698758   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.698758   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.703008   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:46.703008   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Audit-Id: 267c13d1-5975-4d40-9cec-ed87f9a99293
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.703008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.703146   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.703146   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.703358   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:46.704101   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:46.704101   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.704192   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.704192   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.709136   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:46.709732   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.709732   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Audit-Id: e7701e6f-c8f2-4e63-98fd-4ba86b63b7b4
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.709732   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.709879   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.710580   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:47.201604   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:47.201604   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.201604   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.201777   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.206181   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:47.206355   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Audit-Id: 37f96e35-2021-418e-a347-7dd4a96c0724
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.206355   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.206355   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.206653   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:47.207425   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:47.207425   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.207425   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.207425   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.213077   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:47.213077   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.213077   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.213077   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Audit-Id: 9e737c44-23db-40d5-bab1-401986426d75
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.213077   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:47.699238   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:47.699278   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.699278   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.699278   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.702898   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:47.702898   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.703845   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.703845   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.703886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.703886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.703886   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.703886   14012 round_trippers.go:580]     Audit-Id: b8c95419-2597-4b55-a78e-72f849be61c6
	I0624 05:50:47.704099   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:47.704759   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:47.704759   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.704759   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.704759   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.706798   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:47.707849   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Audit-Id: fe8adc43-07a7-4da0-94df-74cdfbd9687a
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.707849   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.707849   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.708228   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.204359   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:48.204431   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.204431   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.204431   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.208385   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:48.208385   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.208385   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.208385   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Audit-Id: ef598147-1fcf-4bda-85ab-0c10cd9fd175
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.208871   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:48.209967   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:48.209967   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.209967   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.210032   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.214255   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:48.214405   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.214405   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Audit-Id: 9568c26a-2a32-4085-8908-e71a0179feb3
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.214405   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.214911   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.696805   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:48.696901   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.696901   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.696901   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.702757   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:48.702853   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.702853   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.702853   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Audit-Id: b1bcac8d-f350-47f9-83a4-bbcd7b6e1a59
	I0624 05:50:48.703038   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:48.703927   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:48.703927   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.703927   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.703927   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.708858   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:48.709789   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Audit-Id: 3c325530-0a95-493e-8c6d-2a4015f5766d
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.709789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.709789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.709859   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.711154   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.711620   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:49.203832   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:49.204012   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.204012   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.204082   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.209540   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:49.210435   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.210435   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Audit-Id: df743d75-5896-4d8d-ae9f-a629513f97d2
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.210509   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.210768   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:49.211541   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:49.211600   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.211600   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.211600   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.214276   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:49.214276   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Audit-Id: 14573cd5-79aa-4ce0-bab3-200d2ccd6c2a
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.214276   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.214276   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.215242   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:49.704933   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:49.704933   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.705000   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.705000   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.720000   14012 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 05:50:49.720263   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.720263   14012 round_trippers.go:580]     Audit-Id: b6f0ff52-d323-4fef-ab68-7082b5ce5f06
	I0624 05:50:49.720263   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.720364   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.720364   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.720364   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.720364   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.720577   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1952","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0624 05:50:49.721169   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:49.721169   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.721169   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.721169   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.725921   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:49.725921   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Audit-Id: 509f524e-1cc2-4b71-9a15-bb37cdfb2532
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.725921   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.725921   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.726464   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.726967   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.208274   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:50.208274   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.208274   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.208274   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.211874   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.211874   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.211874   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.211874   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.211874   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Audit-Id: 6875be7c-8d78-47b1-8fd6-ede70aed85ee
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.213149   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0624 05:50:50.214241   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.214241   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.214241   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.214241   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.217093   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.217093   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.217093   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.217345   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.217345   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Audit-Id: 49c6a5a4-cd7e-4780-8b5e-1466a5d80688
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.217510   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.217510   14012 pod_ready.go:92] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.218052   14012 pod_ready.go:81] duration metric: took 25.5230519s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.218052   14012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.218217   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:50:50.218217   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.218217   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.218217   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.222411   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:50.222547   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.222547   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.222547   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.222606   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.222606   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.222606   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.222626   14012 round_trippers.go:580]     Audit-Id: da8ab028-99ed-49a4-b0e6-0f810bf7c8de
	I0624 05:50:50.222842   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1853","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0624 05:50:50.223405   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.223523   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.223523   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.223523   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.227168   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.227168   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Audit-Id: 5c1b6e9e-798b-45f4-82bd-71c0bf1da5bc
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.227168   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.227168   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.227168   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.227917   14012 pod_ready.go:92] pod "etcd-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.227917   14012 pod_ready.go:81] duration metric: took 9.8651ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.227917   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.227917   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:50:50.227917   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.227917   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.227917   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.230491   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.230491   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Audit-Id: cf0fb134-b92b-40e0-b6fe-da7f623af6d8
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.230491   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.230491   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.231030   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a1504b-2338-458c-b448-92e8836b479a","resourceVersion":"1846","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.217.139:8443","kubernetes.io/config.hash":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.mirror":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.seen":"2024-06-24T12:49:37.772966703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0624 05:50:50.231643   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.231734   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.231734   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.231734   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.234071   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.234559   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.234559   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.234559   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.234559   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.234559   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.234613   14012 round_trippers.go:580]     Audit-Id: df3ad430-1866-42b0-8bfd-d801319ce2e5
	I0624 05:50:50.234613   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.234647   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.234647   14012 pod_ready.go:92] pod "kube-apiserver-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.235250   14012 pod_ready.go:81] duration metric: took 7.3325ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.235250   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.235444   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:50:50.235509   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.235509   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.235509   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.238315   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.238315   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.238315   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.238315   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.238315   14012 round_trippers.go:580]     Audit-Id: af10861d-392a-4f44-b4b8-286e7c1e4cda
	I0624 05:50:50.238713   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.238713   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.238713   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.238816   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"1858","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0624 05:50:50.239620   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.239620   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.239620   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.239729   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.242415   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.242415   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Audit-Id: 6fe68ddc-c0bc-4307-8fac-49c1f78e2bef
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.242415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.242415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.242780   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.243319   14012 pod_ready.go:92] pod "kube-controller-manager-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.243429   14012 pod_ready.go:81] duration metric: took 8.1145ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.243490   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.243618   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:50:50.243664   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.243664   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.243664   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.247358   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.247494   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.247494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.247494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Audit-Id: 2f68711f-b479-4a0f-b39a-045b1c99f7b5
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.247803   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hjjs8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e148504-3300-4591-9576-7c5597851f41","resourceVersion":"1939","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:50:50.247803   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:50:50.248331   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.248331   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.248331   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.250376   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.250376   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.250376   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Audit-Id: ec0fd1fa-fcfd-49b0-a0f5-eeea8ac968a3
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.251017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.251017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.251235   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"1943","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0624 05:50:50.251704   14012 pod_ready.go:97] node "multinode-876600-m02" hosting pod "kube-proxy-hjjs8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m02" has status "Ready":"Unknown"
	I0624 05:50:50.251704   14012 pod_ready.go:81] duration metric: took 8.2144ms for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	E0624 05:50:50.251704   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m02" hosting pod "kube-proxy-hjjs8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m02" has status "Ready":"Unknown"
	I0624 05:50:50.251795   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.412696   14012 request.go:629] Waited for 160.6528ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:50:50.412899   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:50:50.412899   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.412899   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.413024   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.420711   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:50.420711   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Audit-Id: 299d6a0e-4928-45ca-ba8b-ac6502375d69
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.420711   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.420711   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.421674   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"1835","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0624 05:50:50.617508   14012 request.go:629] Waited for 194.9795ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.617694   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.617694   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.617694   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.617694   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.622257   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:50.622484   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.622484   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.622484   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.622484   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Audit-Id: 1aa639b0-062e-4be3-b537-db1e3604ea22
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.622864   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.623554   14012 pod_ready.go:92] pod "kube-proxy-lcc9v" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.623554   14012 pod_ready.go:81] duration metric: took 371.758ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.623554   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.821681   14012 request.go:629] Waited for 197.8096ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:50:50.821946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:50:50.821946   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.821946   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.821946   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.825504   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.826314   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Audit-Id: e1b8c870-5a55-4a8c-9b00-1fc656c01133
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.826314   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.826314   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.826595   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf7jm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4f99ace-bf94-40d8-b28f-27ec938418ef","resourceVersion":"1727","creationTimestamp":"2024-06-24T12:34:19Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:50:51.009270   14012 request.go:629] Waited for 181.7474ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:50:51.009373   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:50:51.009373   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.009373   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.009373   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.013220   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:51.014236   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Audit-Id: dca87f8d-5b45-4ca4-8340-ac8714659904
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.014279   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.014279   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.014279   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:51.014706   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m03","uid":"1392cc6a-2e48-4bde-9120-b3d99174bf99","resourceVersion":"1891","creationTimestamp":"2024-06-24T12:45:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_45_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0624 05:50:51.014706   14012 pod_ready.go:97] node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:50:51.015284   14012 pod_ready.go:81] duration metric: took 391.6036ms for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	E0624 05:50:51.015284   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:50:51.015499   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:51.213376   14012 request.go:629] Waited for 197.8157ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:50:51.213549   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:50:51.213651   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.213651   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.213742   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.218086   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:51.218684   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:51 GMT
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Audit-Id: b9b097ad-d339-43e2-86b9-d986d6804896
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.218684   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.218684   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.218868   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"1848","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0624 05:50:51.417429   14012 request.go:629] Waited for 197.8367ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:51.417851   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:51.417851   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.417851   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.417851   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.420821   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:51.421494   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Audit-Id: 5c4da465-9bef-4803-b32c-e3eb42b083cd
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.421494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.421494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:51 GMT
	I0624 05:50:51.421757   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:51.422416   14012 pod_ready.go:92] pod "kube-scheduler-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:51.422466   14012 pod_ready.go:81] duration metric: took 406.9049ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:51.422557   14012 pod_ready.go:38] duration metric: took 26.740062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:50:51.422557   14012 api_server.go:52] waiting for apiserver process to appear ...
	I0624 05:50:51.432044   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:51.458761   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:51.458761   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:51.467978   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:51.496085   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:51.496085   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:51.504069   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:51.527791   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:51.527791   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:51.527915   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:51.536556   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:51.557989   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:51.557989   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:51.557989   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:51.567037   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:51.588414   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:51.588414   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:51.588414   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:51.596415   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:51.620411   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:51.620411   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:51.620411   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:51.628442   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:51.651409   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:51.651409   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:51.652620   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:51.652620   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:51.652712   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:51.884183   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:51.884183   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:51.884183   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.884183   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:51.884183   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:51.884183   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.884183   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.884183   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.884183   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:44 +0000
	I0624 05:50:51.884183   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.884183   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:51.884739   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:51.884739   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:51.884739   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:51.884739   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:51.884860   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:51.884860   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.884959   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:51.885033   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:51.885033   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.885076   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.885076   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.885076   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.885076   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.885076   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.885076   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.885076   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.885199   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:51.885199   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.885199   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.885344   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:51.885344   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:51.885344   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:51.885384   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.885409   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.885409   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:51.885409   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:51.885409   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0624 05:50:51.885477   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:51.885477   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0624 05:50:51.885548   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.885648   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:51.885648   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:51.885648   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:51.885715   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:51.885715   14012 command_runner.go:130] > Events:
	I0624 05:50:51.885715   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:51.885715   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:51.885715   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.885852   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:51.885880   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:51.885913   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:51.885913   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:51.886012   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:51.886012   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:51.886079   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.886106   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.886106   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.886138   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:51.886187   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.886187   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.886218   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.886218   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:51.886218   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:51.886218   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.886218   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:51.886218   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.886218   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:51.886218   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:51.886218   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:51.886218   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.886218   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.886218   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.886218   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.886743   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:51.886743   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.886743   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.886928   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:51.886928   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:51.886928   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:51.886928   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.886992   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:51.886992   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:51.886992   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:51.886992   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:51.886992   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:51.886992   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:51.886992   14012 command_runner.go:130] > Events:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:51.886992   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:51.886992   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.886992   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.886992   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:51.886992   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:51.886992   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.886992   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.886992   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:51.886992   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.887571   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:51.887571   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:51.887844   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.887844   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:51.887844   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:51.887844   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.887844   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.887844   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.887844   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.887844   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.887844   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.888382   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.888459   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.888459   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.888459   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.888523   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.888557   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.888557   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:51.888603   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.888603   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.888696   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.888696   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.888765   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.888809   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.888809   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.888809   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:51.888881   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:51.888881   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:51.888881   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.888881   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.889018   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:51.889018   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:51.889080   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.889122   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.889122   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:51.889122   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:51.889122   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:51.889242   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:51.889242   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:51.889287   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:51.889287   14012 command_runner.go:130] > Events:
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:51.889287   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  Starting                 5m35s                  kube-proxy       
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.889828   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:51.889828   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  RegisteredNode           5m36s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
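The describe output above shows multinode-876600-m02 and multinode-876600-m03 both flagged NodeNotReady by the node-controller, with m03 additionally carrying node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status. A minimal sketch for re-checking those conditions against the same cluster, assuming the multinode-876600 kubectl context captured in this run is still reachable:

    kubectl --context multinode-876600 get nodes -o wide
    kubectl --context multinode-876600 describe node multinode-876600-m03

Both commands only read node state and return the same Conditions and Taints tables quoted in the log.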
	I0624 05:50:51.900471   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:51.900471   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:51.935016   14012 command_runner.go:130] > .:53
	I0624 05:50:51.935016   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:51.935016   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:51.935016   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:51.935016   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:51.935016   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:50:51.935016   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:50:51.964706   14012 command_runner.go:130] > .:53
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:51.964706   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:51.964706   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:50:51.965358   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:50:51.965516   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:50:51.965552   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:50:51.965673   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:50:51.965698   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:50:51.965698   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:50:51.965760   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:50:51.965760   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:50:51.965790   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:50:51.965790   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:50:51.965827   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
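The two coredns blocks above were collected by running docker logs --tail 400 against each container ID inside the VM, as the ssh_runner lines show. A minimal sketch for pulling the same logs by hand, assuming the multinode-876600 profile is still running and reusing the container ID f46bdc12472e taken from the log:

    out/minikube-windows-amd64.exe -p multinode-876600 ssh -- docker logs --tail 400 f46bdc12472e

The SIGTERM and lameduck entries at the end are the normal graceful-shutdown sequence of the previous coredns instance rather than an error in themselves.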
	I0624 05:50:51.968543   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:51.968543   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:51.996333   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:51.996458   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:51.996530   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:51.996530   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
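The kube-proxy log above records it resolving the node IP 172.31.217.139, running in single-stack IPv4 iptables mode, and syncing its service, endpoint slice, and node config caches, so the proxier itself came up cleanly after the restart. A minimal sketch for inspecting the rules it programs, assuming the standard KUBE-SERVICES chain that kube-proxy creates in iptables mode:

    out/minikube-windows-amd64.exe -p multinode-876600 ssh -- sudo iptables -t nat -L KUBE-SERVICES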
	I0624 05:50:51.998850   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:51.998850   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:52.042210   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042564   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042606   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042630   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042630   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042812   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042835   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042835   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044112   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044361   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044392   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044392   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045857   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045857   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045922   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045922   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045970   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046027   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046027   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046062   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:52.046062   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046092   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046092   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:52.046198   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046198   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046366   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046366   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046414   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:52.046547   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046547   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046572   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046609   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046625   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046625   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046756   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046756   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:52.046839   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046910   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:52.046934   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046934   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047600   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:52.047600   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047770   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047770   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048033   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048254   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048352   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048352   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048780   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048823   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048823   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048916   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.066982   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:52.067976   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:52.103673   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:52.104099   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:52.104122   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:52.104265   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.104531   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:52.104897   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:52.104897   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:52.104933   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:52.104964   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:52.104964   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:52.105247   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:52.105307   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:52.105347   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:52.105390   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:52.105390   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:52.105522   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:52.105522   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:52.105733   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:52.105733   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:52.105795   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.105795   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:52.105828   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:52.105828   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.106373   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.106425   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:52.106425   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:52.106492   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:52.106540   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:52.106540   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:52.106733   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:52.106957   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:52.106980   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:52.106980   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:52.107072   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:52.107886   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:52.107927   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:52.107927   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.108685   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.108855   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:52.108881   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:52.108881   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:52.109025   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:52.109310   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.128256   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:52.128256   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:52.193315   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:52.193315   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:52.193315   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:52.193315   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:52.193315   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:52.193315   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:52.193315   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:52.193315   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:52.193315   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:52.193844   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:52.193889   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:52.193952   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:52.194007   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:52.194069   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:52.194144   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:52.194144   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:52.194194   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:52.196600   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:50:52.196600   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:50:52.229502   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.230216   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:52.230324   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.230324   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:52.230387   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.232678   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:52.232743   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:52.265642   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.265642   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:52.265642   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.265880   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:52.265880   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:52.265880   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:52.265956   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.265956   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.266023   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:52.266041   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.266041   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.266041   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.266109   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.266176   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266250   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266287   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.266356   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.266356   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.266425   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.266425   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266492   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266492   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.266561   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.266622   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.266622   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.266702   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266702   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266788   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266788   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.266948   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.267010   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.267010   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.267112   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267161   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267188   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267227   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267473   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.267652   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.267652   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267726   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267794   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267884   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267918   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.267963   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.267963   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.268042   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.268042   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.268104   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.268129   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268314   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.268338   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.268368   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.268408   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.272855   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.282341   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:52.282341   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:52.310703   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:52.310795   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:52.310838   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:52.310838   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:52.315988   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:50:52.315988   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:50:52.344643   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:50:52.345508   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:50:52.345508   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345744   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:50:52.345744   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345815   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345849   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:52.345873   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345873   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.348640   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:52.348640   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:52.426900   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:50:52.426900   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:50:52.447923   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:50:52.450901   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:50:52.450901   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:50:52.480899   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:50:52.480899   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:50:52.481184   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.481384   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:50:52.481453   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:52.481453   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:50:52.481526   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:50:52.481603   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:50:52.481603   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:50:52.483042   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:50:52.496815   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:52.496815   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:52.525250   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"in
itial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:52.525982   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:52.526037   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:52.526066   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:52.526097   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:52.526126   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:52.526161   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:52.526161   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:52.534097   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:50:52.534097   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:52.564670   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:50:52.581683   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:52.581683   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.613465   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.613465   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.613512   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.613512   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.613560   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.613622   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613622   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613665   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614221   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614221   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614360   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614382   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614471   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:52.614641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:52.614641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615551   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615684   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615725   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615725   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:52.615992   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:52.616055   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:52.616116   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:52.616145   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.616210   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:52.616249   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:52.616249   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:52.616327   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:52.616327   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:52.616424   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:52.616476   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:52.616476   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:52.616512   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:52.616512   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:52.616551   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:52.616551   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:52.616648   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:52.616671   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.617226   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.617226   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:52.617353   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:52.617385   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:52.617385   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617984   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617984   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618382   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:52.618382   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.620288   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:52.620462   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:52.620494   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620717   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620809   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620809   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620973   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621005   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621061   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621061   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621148   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621174   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621869   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621894   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621948   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621948   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622143   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622143   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622218   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622218   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622272   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.622272   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622347   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622372   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622419   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622436   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:50:52.622479   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.623166   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623195   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:55.165524   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:50:55.195904   14012 command_runner.go:130] > 1846
	I0624 05:50:55.195904   14012 api_server.go:72] duration metric: took 1m6.8294375s to wait for apiserver process to appear ...
	I0624 05:50:55.195904   14012 api_server.go:88] waiting for apiserver healthz status ...
	I0624 05:50:55.206294   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:55.230775   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:55.231779   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:55.241709   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:55.267588   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:55.268429   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:55.278463   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:55.301966   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:55.302295   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:55.302295   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:55.312228   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:55.338292   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:55.338292   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:55.338292   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:55.348214   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:55.375100   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:55.375100   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:55.375100   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:55.386326   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:55.413476   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:55.413476   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:55.414654   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:55.424594   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:55.451675   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:55.451675   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:55.452023   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:55.452089   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:50:55.452163   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:50:55.481047   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:50:55.481736   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:50:55.481736   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:50:55.481773   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:50:55.481773   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:50:55.481773   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:50:55.481867   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:50:55.481867   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:50:55.481867   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:50:55.482035   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:50:55.482035   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:50:55.482085   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:50:55.482085   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:50:55.482121   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:50:55.482121   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:50:55.482152   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:50:55.482152   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:50:55.484212   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:55.484303   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:55.513413   14012 command_runner.go:130] > .:53
	I0624 05:50:55.513478   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:55.513478   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:55.513541   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:55.513541   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:55.514343   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:55.514411   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:55.551536   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:55.552266   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:55.552330   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:55.552330   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:55.552398   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:55.552463   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:55.552463   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:55.552529   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.552529   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:55.552609   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:55.552760   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:55.552808   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:55.552859   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:55.552859   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:55.559556   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:55.559613   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592259   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592259   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:55.592390   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:55.592477   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:55.592477   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592984   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592984   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:55.593228   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:55.593457   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:55.593764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:55.593764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:55.593872   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.593945   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594003   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:55.594536   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:55.594827   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:55.595146   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:55.595224   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:55.595287   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:55.595287   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:55.595331   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:55.595701   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:55.595701   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:55.595806   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.595806   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.595918   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.595918   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:55.596406   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:55.596406   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:55.596524   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:55.596524   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:55.596634   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:55.596634   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:55.596861   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:55.596861   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:55.596974   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:55.597085   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:55.597085   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:55.597197   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597197   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597299   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597410   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.597410   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597521   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597630   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597630   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:55.597733   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.597733   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597843   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597957   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:55.597957   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:55.598067   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.598067   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.598181   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:55.598181   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.598291   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.598291   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598403   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598403   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598515   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598571   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:55.598609   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:55.598701   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598768   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598859   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598859   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.599048   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:55.599104   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.599162   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.599252   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.599303   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:55.599424   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:55.599462   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:55.599509   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:55.599566   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:55.599618   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:55.600274   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.600371   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:55.600484   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601079   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601130   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.601130   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.601827   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.601898   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601898   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602018   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.602156   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.602213   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602292   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602408   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602477   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602649   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602705   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.602757   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.603462   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604841   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604841   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:55.605919   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:55.647121   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:55.647121   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:55.683233   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.683550   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:55.683550   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.683722   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:55.683778   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:55.683865   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:55.684438   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:55.684438   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:55.684512   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:55.684607   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:55.684695   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:55.684779   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:55.684852   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:55.684885   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:55.684911   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:55.685002   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:55.685048   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:55.685048   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:55.685283   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:55.685283   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.686479   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:55.686479   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:55.686554   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:55.686554   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:55.686595   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:55.686595   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.687902   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:55.687902   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:55.689458   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:55.689458   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:55.689898   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:55.689898   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.710689   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:50:55.710725   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:50:55.749020   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:50:55.749824   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:50:55.749990   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750349   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750349   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750427   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750427   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750673   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750673   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.752437   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:55.752437   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:55.783988   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:55.784416   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:55.784517   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:55.784517   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:55.785378   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:55.785378   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:55.785538   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:55.785538   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:55.785611   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:55.785611   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:55.785667   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:55.785667   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:55.785760   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:55.785806   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:55.792160   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:50:55.792328   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:50:55.821056   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:55.821850   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.824408   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:55.824476   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:55.860513   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.860513   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:55.862287   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.862287   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.862287   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.862357   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.862357   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862433   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862433   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.862577   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.862725   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.862725   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.862790   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862790   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862864   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862864   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863025   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863025   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863187   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863187   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863231   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.863287   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.863287   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.863481   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863481   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863525   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863525   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863602   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863631   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863631   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863680   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863680   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.863738   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.863738   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863802   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863802   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863888   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863888   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.863959   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.863959   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.864036   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.864168   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.864168   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.864316   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.878075   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:50:55.878075   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:50:55.927396   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.927716   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:55.927716   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.928028   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:55.928095   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.928139   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:55.928265   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:55.928405   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:55.928405   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:55.928676   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:55.928747   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:55.928747   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:55.928803   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:55.928803   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:55.928863   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:55.928863   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:55.929070   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:55.930021   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:55.930021   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:55.930234   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:55.930234   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.930446   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:55.930651   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.931678   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931836   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931836   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931897   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931917   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:55.931967   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:55.931967   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:55.932186   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:55.932186   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:55.932186   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:55.934780   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:55.934828   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:55.935176   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:55.935176   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:55.935223   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:55.935223   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:55.935262   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:55.935262   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:55.935412   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:55.935450   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:55.935450   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:55.935483   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:55.935483   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:55.935525   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:55.935525   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:55.935620   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.935620   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.935725   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:55.935725   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:55.935850   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:55.935850   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:55.936000   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:55.936033   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:55.936069   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:55.936724   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:55.936724   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.937021   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:55.937021   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.937067   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937109   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:55.937156   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:55.937197   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937197   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.937242   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937242   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:55.937281   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:55.937382   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:55.937382   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:55.937468   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:55.937512   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:55.937512   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:55.937594   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:55.937633   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:55.937633   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:55.937759   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:55.937798   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:55.937798   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:50:55.957698   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:55.957698   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:55.989949   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:55.990666   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992371   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:55.995485   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996253   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997237   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997805   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:55.997805   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997842   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997842   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.997873   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.997913   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998180   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998218   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998218   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999018   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999018   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:56.015895   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:50:56.015895   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:50:56.049247   14012 command_runner.go:130] > .:53
	I0624 05:50:56.049450   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:56.049450   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:56.049450   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:50:56.049635   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:50:56.049635   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:50:56.049798   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:50:56.049798   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:50:56.049849   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:50:56.049849   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:50:56.049885   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:50:56.049885   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:50:56.049997   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:50:56.050025   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0624 05:50:56.054740   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:50:56.054848   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:50:56.083943   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:50:56.083943   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:50:56.084947   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:56.085100   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:50:56.085165   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:56.085212   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:50:56.085256   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:50:56.085315   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:50:56.085315   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:50:56.085377   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085377   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:50:56.085415   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085415   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085453   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085453   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:50:56.085489   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085489   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085545   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:50:56.085545   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085588   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:50:56.085588   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:50:56.085588   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:50:56.085649   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:50:56.085649   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:50:56.085649   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:50:56.085715   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:50:56.085715   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085756   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085756   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:50:56.085793   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085793   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085793   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:50:56.085793   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:50:56.085879   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085879   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085879   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086031   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:50:56.086079   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086079   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086079   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:50:56.086159   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086159   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086214   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:50:56.086214   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:50:56.086258   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:50:56.086258   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:50:56.086368   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:50:56.086434   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086434   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:56.086434   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:50:56.086621   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:50:56.086669   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:50:56.086669   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:50:56.086808   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:50:56.086849   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:50:56.086923   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:50:56.086923   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:50:56.087002   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:50:56.087002   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:50:56.087067   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:50:56.087104   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:50:56.087104   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:50:56.087144   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:50:56.096168   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:56.096168   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:56.126053   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:56.126122   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:56.126122   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:56.126506   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:56.126506   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:56.128317   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:56.128414   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:56.161387   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161387   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.162133   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162159   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:56.162373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162482   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163274   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163274   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163324   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163364   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163480   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163480   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163525   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163525   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163581   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163581   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163624   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163624   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163676   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:56.163676   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163717   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163717   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:56.163760   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:56.163820   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:56.163820   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:56.163917   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:56.163975   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163975   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:56.164210   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:56.164210   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:56.164270   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:56.164270   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:56.164676   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:56.164676   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:56.164720   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164755   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.164797   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164843   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.164843   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164891   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:56.164891   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164937   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164977   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164977   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.165041   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.165041   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.165109   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:56.165109   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165737   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165737   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165889   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:56.165927   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:56.166553   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:56.166553   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:56.166775   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166981   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166981   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167032   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168522   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168522   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168672   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168726   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168726   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168773   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168773   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.168850   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:50:56.168850   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168902   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168902   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168971   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.169008   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.169033   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.169033   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.169122   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.198252   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:56.198252   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:56.269890   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:56.270158   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:56.270158   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:56.270158   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:56.270281   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:56.270326   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:56.270326   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:56.270326   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:56.270415   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:56.270415   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:56.270500   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:56.270500   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:56.270540   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:56.270540   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:56.270540   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:56.270629   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:56.270629   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:56.273020   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:56.273020   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:56.492041   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:56.492107   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:56.492107   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.492107   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.492107   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:56.492282   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.492282   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.492282   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.492330   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:56.492330   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:56.492330   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.492389   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.492389   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:56.492389   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.492435   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:55 +0000
	I0624 05:50:56.492435   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.492435   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:56.492485   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:56.492485   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:56.492539   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:56.492539   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:56.492588   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:56.492588   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.492635   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:56.492635   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:56.492635   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.492635   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.492684   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.492684   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.492684   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.492684   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.492684   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.492730   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.492730   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.492771   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.492771   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.492771   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.492771   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.492771   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:56.492771   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:56.492817   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:56.492909   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.492909   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.493015   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:56.493015   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:56.493015   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:56.493015   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.493098   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.493098   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:56.493098   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493310   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.493310   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.493310   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:56.493310   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:56.493354   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:56.493408   14012 command_runner.go:130] > Events:
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:56.493408   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:56.493642   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:56.493642   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.493642   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.493642   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:56.493642   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:56.493642   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.493642   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.493642   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:56.493642   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:56.493642   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.493642   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.493642   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.493642   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.494182   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.494182   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.494182   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:56.494182   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:56.494247   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:56.494247   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.494247   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.494365   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.494406   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:56.494406   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:56.494479   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:56.494479   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.494525   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.494525   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:56.494565   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:56.494565   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:56.494609   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.494609   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.494609   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:56.494609   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:56.494663   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:56.494663   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:56.494663   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:56.494709   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:56.494709   14012 command_runner.go:130] > Events:
	I0624 05:50:56.494709   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:56.494709   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.494809   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:56.494809   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.494857   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:56.494857   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:56.494901   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:56.494901   14012 command_runner.go:130] >   Normal  NodeNotReady             21s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:56.494952   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:56.494952   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:56.494952   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.494952   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.495097   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.495097   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:56.495097   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:56.495169   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:56.495169   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.495169   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.495169   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:56.495169   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.495169   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:56.495242   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.495242   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:56.495242   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:56.495304   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495492   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.495492   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:56.495492   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:56.495492   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.495530   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.495530   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.495530   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.495530   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.495530   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.495530   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.495593   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.495593   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.495593   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.495593   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.495593   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.495593   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:56.495652   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.495652   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.495714   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:56.495774   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:56.495774   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:56.495845   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.495845   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.495905   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:56.495905   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:56.495905   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.495905   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.495905   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:56.495905   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:56.495905   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:56.495905   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:56.495985   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:56.495985   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:56.495985   14012 command_runner.go:130] > Events:
	I0624 05:50:56.495985   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:56.495985   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  Starting                 5m40s                  kube-proxy       
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.496110   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.496303   14012 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:56.496303   14012 command_runner.go:130] >   Normal  NodeReady                5m36s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:56.496344   14012 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:56.496344   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:59.010391   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:50:59.019178   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
	I0624 05:50:59.019178   14012 round_trippers.go:463] GET https://172.31.217.139:8443/version
	I0624 05:50:59.019178   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:59.019178   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:59.019178   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:59.021643   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:59.021643   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:59.021643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:59.021643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Content-Length: 263
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:59 GMT
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Audit-Id: a34bdbe4-d317-4e0e-988d-97dd2edb80de
	I0624 05:50:59.021643   14012 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0624 05:50:59.021643   14012 api_server.go:141] control plane version: v1.30.2
	I0624 05:50:59.021643   14012 api_server.go:131] duration metric: took 3.8257243s to wait for apiserver health ...
	I0624 05:50:59.021643   14012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 05:50:59.032181   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:59.062863   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:59.062935   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:59.073166   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:59.098295   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:59.098295   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:59.112486   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:59.136316   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:59.136316   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:59.136316   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:59.145312   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:59.170374   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:59.170374   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:59.170374   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:59.179748   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:59.211023   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:59.211023   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:59.211106   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:59.220417   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:59.247808   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:59.247847   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:59.247847   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:59.256586   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:59.280125   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:59.280125   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:59.280125   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:59.280125   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:59.280125   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:59.309443   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:59.309443   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:59.309735   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:59.309777   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:59.309777   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:59.309922   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:59.309922   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:59.309983   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:59.309983   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:59.309983   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:59.310512   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:59.310512   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.330479   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:59.331476   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:59.368509   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:59.368509   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369160   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369446   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369446   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369489   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369489   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:59.369530   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369530   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369557   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:59.369641   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369641   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369882   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370444   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370444   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:59.370539   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370539   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370608   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370608   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370796   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370824   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371021   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371021   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371707   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:59.371707   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371754   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371754   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371927   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:59.371927   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371953   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373245   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:59.373245   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:59.373401   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373573   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:59.373573   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373598   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374519   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374519   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374818   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374818   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374854   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.393432   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:59.393432   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:59.425793   14012 command_runner.go:130] > .:53
	I0624 05:50:59.425793   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:59.425793   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:59.425793   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:59.425793   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:59.425793   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:59.425793   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:59.458148   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:59.460485   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:59.460485   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"in
itial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:59.497453   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:59.497453   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:59.524454   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:59.525104   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:59.527524   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:59.527524   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:59.592602   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:59.592602   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:59.592602   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:59.592602   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         29 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:59.592602   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:59.592602   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:59.592602   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:59.592602   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:59.592602   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:59.592602   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:59.592602   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:59.592602   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:59.595598   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:59.595598   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:59.678758   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:59.678758   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:59.895215   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:59.895215   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:59.895215   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.895215   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:59.895215   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:59.895215   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.895215   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.895215   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.895215   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:55 +0000
	I0624 05:50:59.895215   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.895215   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:59.895215   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:59.895215   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:59.895744   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:59.895744   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:59.895744   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:59.895744   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.895744   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:59.895872   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:59.895872   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.895872   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.895936   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.895936   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.895965   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.895965   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.895965   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.895965   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.896003   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.896003   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.896003   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.896003   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:59.896003   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.896003   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.896003   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:59.896003   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:59.896003   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.896003   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:59.896003   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:59.896003   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:59.896003   14012 command_runner.go:130] > Events:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:59.896682   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:59.896682   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:59.896682   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:59.896682   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.896840   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.896840   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.896961   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.896961   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:59.896961   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:59.896961   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:59.896961   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.897029   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.897029   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:59.897029   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.897029   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:59.897029   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.897096   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:59.897096   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:59.897169   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.897266   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:59.897266   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:59.897289   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.897289   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.897289   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.897318   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.897318   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:59.897318   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.897318   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.897318   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:59.897318   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:59.897318   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.897318   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:59.897318   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:59.897318   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:59.897318   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:59.897318   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:59.897318   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:59.897318   14012 command_runner.go:130] > Events:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:59.897842   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:59.897842   14012 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:59.897842   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:59.897842   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:59.897842   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.897842   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.898048   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.898110   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:59.898133   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:59.898133   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:59.898133   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.898162   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:59.898162   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.898162   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:59.898162   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:59.898162   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:59.898162   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.898162   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.898162   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.898162   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.898162   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:59.898162   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.898162   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.898162   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:59.898162   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:59.898162   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.898694   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.898694   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:59.898757   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:59.898757   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.898757   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:59.898757   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:59.898757   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:59.898849   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:59.898849   14012 command_runner.go:130] > Events:
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:59.898873   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0624 05:50:59.899026   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.899026   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:59.899065   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeReady                5m39s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:59.909496   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:59.909496   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.947382   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.947927   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.947995   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.947995   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.948080   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948145   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948145   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948229   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.948789   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.949533   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.949533   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.949650   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.949699   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:59.960393   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:59.961288   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.032368   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:51:00.032368   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:51:00.056388   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:51:00.058423   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:51:00.058423   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:51:00.093374   14012 command_runner.go:130] > .:53
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:51:00.093374   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:51:00.093374   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0624 05:51:00.096374   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:51:00.096374   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:51:00.129861   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:51:00.129883   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:51:00.129910   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:51:00.129910   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:51:00.130080   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:51:00.130080   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:51:00.130168   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:51:00.130194   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:51:00.130751   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:51:00.130751   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:51:00.130797   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:51:00.130797   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:51:00.131002   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:51:00.131322   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:51:00.131322   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:51:00.131817   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:51:00.131838   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:51:00.131905   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:51:00.132493   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:51:00.132493   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:51:00.132634   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:51:00.132810   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:51:00.132810   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132876   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:51:00.132935   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132952   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:51:00.133042   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:51:00.133042   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:51:00.133069   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:51:00.133102   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:51:00.133144   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:51:00.133144   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:51:00.133188   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:51:00.133188   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:51:00.133229   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:51:00.133229   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:51:00.133274   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:51:00.133380   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:51:00.133406   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:51:00.151228   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:51:00.151355   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:51:00.182001   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:51:00.182001   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:51:00.182001   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565654       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565717       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565730       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565753       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.566419       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.566456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.186995   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:51:00.186995   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:51:00.215630   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:51:00.215630   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:51:00.216454   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:51:00.216680   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:51:00.216925   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:51:00.217495   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:51:00.217495   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217705   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:51:00.217705   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:51:00.217705   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:51:00.217790   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:51:00.217815   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:51:00.218441   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:51:00.218441   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:51:00.218556   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:51:00.218806   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:51:00.218828   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:51:00.218856   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:51:00.228150   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:51:00.228298   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:51:00.255776   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:51:00.255776   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:51:00.256322   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:51:00.256322   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:51:00.256411   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:51:02.761802   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:51:02.761802   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.761802   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.761802   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.766423   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.767425   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.767425   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Audit-Id: a5332d78-2dfa-41a7-a889-d3a1aa1e43bb
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.767497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.771045   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1968"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86620 chars]
	I0624 05:51:02.776186   14012 system_pods.go:59] 12 kube-system pods found
	I0624 05:51:02.776186   14012 system_pods.go:61] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:51:02.776186   14012 system_pods.go:74] duration metric: took 3.7545293s to wait for pod list to return data ...
	I0624 05:51:02.776186   14012 default_sa.go:34] waiting for default service account to be created ...
	I0624 05:51:02.776186   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/default/serviceaccounts
	I0624 05:51:02.776186   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.776186   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.776186   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.779828   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:51:02.779828   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.779828   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.779828   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.780669   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.780669   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Content-Length: 262
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Audit-Id: 3d4479b7-8e67-4bb1-8585-674b083d983a
	I0624 05:51:02.780669   14012 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b646e33d-a735-486e-bc23-8dd57a7f6b3f","resourceVersion":"332","creationTimestamp":"2024-06-24T12:26:40Z"}}]}
	I0624 05:51:02.781040   14012 default_sa.go:45] found service account: "default"
	I0624 05:51:02.781115   14012 default_sa.go:55] duration metric: took 4.8535ms for default service account to be created ...
	I0624 05:51:02.781115   14012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 05:51:02.781195   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:51:02.781285   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.781285   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.781285   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.785800   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.785800   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Audit-Id: 19416cc3-9eeb-4828-bbbb-377b2329c235
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.786565   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.786565   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.790464   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86620 chars]
	I0624 05:51:02.793961   14012 system_pods.go:86] 12 kube-system pods found
	I0624 05:51:02.793961   14012 system_pods.go:89] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:51:02.793961   14012 system_pods.go:89] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running
	I0624 05:51:02.793961   14012 system_pods.go:89] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:51:02.794861   14012 system_pods.go:126] duration metric: took 13.7458ms to wait for k8s-apps to be running ...
	I0624 05:51:02.794947   14012 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 05:51:02.805870   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:51:02.834989   14012 system_svc.go:56] duration metric: took 40.042ms WaitForService to wait for kubelet
	I0624 05:51:02.834989   14012 kubeadm.go:576] duration metric: took 1m14.468494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:51:02.834989   14012 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:51:02.834989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes
	I0624 05:51:02.834989   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.834989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.834989   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.839573   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.839573   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.839878   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.839878   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Audit-Id: 860307c7-6447-4cb4-be2d-617cc1db0fb0
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.840393   14012 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:105] duration metric: took 6.3691ms to run NodePressure ...
	I0624 05:51:02.841358   14012 start.go:240] waiting for startup goroutines ...
	I0624 05:51:02.841358   14012 start.go:245] waiting for cluster config update ...
	I0624 05:51:02.841358   14012 start.go:254] writing updated cluster config ...
	I0624 05:51:02.845170   14012 out.go:177] 
	I0624 05:51:02.849288   14012 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:51:02.858779   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:51:02.858779   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:02.864794   14012 out.go:177] * Starting "multinode-876600-m02" worker node in "multinode-876600" cluster
	I0624 05:51:02.866784   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:51:02.866784   14012 cache.go:56] Caching tarball of preloaded images
	I0624 05:51:02.867783   14012 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:51:02.867783   14012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:51:02.867783   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:02.869782   14012 start.go:360] acquireMachinesLock for multinode-876600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:51:02.869782   14012 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-876600-m02"
	I0624 05:51:02.870783   14012 start.go:96] Skipping create...Using existing machine configuration
	I0624 05:51:02.870783   14012 fix.go:54] fixHost starting: m02
	I0624 05:51:02.870783   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:05.131447   14012 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 05:51:05.131533   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:05.131533   14012 fix.go:112] recreateIfNeeded on multinode-876600-m02: state=Stopped err=<nil>
	W0624 05:51:05.131533   14012 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 05:51:05.135439   14012 out.go:177] * Restarting existing hyperv VM for "multinode-876600-m02" ...
	I0624 05:51:05.137552   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600-m02
	I0624 05:51:08.249994   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:08.251012   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:08.251012   14012 main.go:141] libmachine: Waiting for host to start...
	I0624 05:51:08.251052   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:13.155592   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:13.155592   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:14.164598   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:16.457611   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:16.458354   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:16.458354   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:19.058407   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:19.059515   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:20.065836   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:22.305520   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:22.306282   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:22.306327   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:24.902710   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:24.903585   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:25.912870   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:28.210316   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:28.210316   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:28.210878   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:30.828602   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:30.828602   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:31.829668   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:36.751199   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:36.751886   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:36.756189   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:41.603473   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:41.603473   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:41.604055   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:41.607858   14012 machine.go:94] provisionDockerMachine start ...
	I0624 05:51:41.607858   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:43.794910   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:43.794910   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:43.795709   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:46.399615   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:46.399615   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:46.405745   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:46.405745   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:46.405745   14012 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:51:46.549778   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 05:51:46.549899   14012 buildroot.go:166] provisioning hostname "multinode-876600-m02"
	I0624 05:51:46.550027   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:48.763165   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:48.763165   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:48.763296   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:51.423313   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:51.423313   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:51.430170   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:51.430767   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:51.430767   14012 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600-m02 && echo "multinode-876600-m02" | sudo tee /etc/hostname
	I0624 05:51:51.604140   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600-m02
	
	I0624 05:51:51.604757   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:53.816718   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:53.816718   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:53.816938   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:56.468229   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:56.468229   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:56.474316   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:56.474316   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:56.474938   14012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:51:56.632615   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
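Hostname provisioning happens in two SSH round-trips: the first writes /etc/hostname, the second patches /etc/hosts only if no entry for the new name exists yet. A condensed sketch of the exact commands sent above, usable for replaying the step by hand on the guest:

    # Set the transient and persistent hostname, then make /etc/hosts resolve it.
    sudo hostname multinode-876600-m02 && echo "multinode-876600-m02" | sudo tee /etc/hostname
    if ! grep -xq '.*\smultinode-876600-m02' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # An existing 127.0.1.1 entry is rewritten in place ...
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600-m02/g' /etc/hosts
      else
        # ... otherwise a new one is appended.
        echo '127.0.1.1 multinode-876600-m02' | sudo tee -a /etc/hosts
      fi
    fi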
	I0624 05:51:56.632679   14012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:51:56.632679   14012 buildroot.go:174] setting up certificates
	I0624 05:51:56.632679   14012 provision.go:84] configureAuth start
	I0624 05:51:56.632679   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:58.829129   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:58.829129   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:58.829859   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:01.481269   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:01.481507   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:01.481507   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:03.621112   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:03.621112   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:03.621523   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:06.179846   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:06.179846   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:06.179846   14012 provision.go:143] copyHostCerts
	I0624 05:52:06.179846   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:52:06.179846   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:52:06.179846   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:52:06.180681   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:52:06.181724   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:52:06.181887   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:52:06.181887   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:52:06.181887   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:52:06.183156   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:52:06.183156   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:52:06.183156   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:52:06.183829   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:52:06.184561   14012 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600-m02 san=[127.0.0.1 172.31.216.161 localhost minikube multinode-876600-m02]
	I0624 05:52:06.555920   14012 provision.go:177] copyRemoteCerts
	I0624 05:52:06.566778   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:52:06.567791   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:08.765561   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:08.765561   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:08.765974   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:11.398907   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:11.398907   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:11.399240   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:11.516095   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9492051s)
	I0624 05:52:11.516180   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:52:11.516180   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:52:11.569260   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:52:11.569390   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0624 05:52:11.619060   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:52:11.619566   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 05:52:11.672123   14012 provision.go:87] duration metric: took 15.0393878s to configureAuth
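configureAuth generates a server certificate with SANs for 127.0.0.1, the VM IP, localhost, minikube and the node name, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hypothetical spot check, not something this test runs, would be to talk to the daemon's TLS port with the client certificates kept under the .minikube directory shown in this log:

    # MK is assumed to point at the .minikube directory from this run
    # (C:\Users\jenkins.minikube1\minikube-integration\.minikube).
    MK="/path/to/.minikube"
    docker --tlsverify \
      --tlscacert "$MK/certs/ca.pem" --tlscert "$MK/certs/cert.pem" --tlskey "$MK/certs/key.pem" \
      -H tcp://172.31.216.161:2376 version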
	I0624 05:52:11.672123   14012 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:52:11.673119   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:52:11.673119   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:13.857294   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:13.857753   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:13.857753   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:16.502788   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:16.502788   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:16.510947   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:16.511487   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:16.511487   14012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:52:16.647937   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:52:16.647937   14012 buildroot.go:70] root file system type: tmpfs
	I0624 05:52:16.648636   14012 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:52:16.648694   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:18.786522   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:18.786522   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:18.786891   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:21.430377   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:21.431130   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:21.437655   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:21.438151   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:21.438299   14012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.217.139"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:52:21.611528   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.217.139
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:52:21.611749   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:23.820344   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:23.820344   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:23.820524   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:26.496862   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:26.496862   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:26.503204   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:26.503954   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:26.503954   14012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:52:28.827759   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:52:28.827759   14012 machine.go:97] duration metric: took 47.2197265s to provisionDockerMachine
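The unit update above is deliberately idempotent: the rendered file is written to docker.service.new, and only when it differs from the installed docker.service (or, as here on a freshly provisioned VM, when /lib/systemd/system/docker.service cannot be found) does the one-liner move it into place, reload systemd, and enable and restart the daemon. The same pattern from the 05:52:26 command, spelled out:

    # Replace and restart only when the rendered unit actually changed.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi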
	I0624 05:52:28.827759   14012 start.go:293] postStartSetup for "multinode-876600-m02" (driver="hyperv")
	I0624 05:52:28.827759   14012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:52:28.841025   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:52:28.841025   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:33.661417   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:33.661684   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:33.661930   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:33.774693   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9336497s)
	I0624 05:52:33.787058   14012 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:52:33.795230   14012 command_runner.go:130] > NAME=Buildroot
	I0624 05:52:33.795301   14012 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:52:33.795301   14012 command_runner.go:130] > ID=buildroot
	I0624 05:52:33.795301   14012 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:52:33.795301   14012 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:52:33.795301   14012 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:52:33.795665   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:52:33.795913   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:52:33.797273   14012 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:52:33.797333   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:52:33.812639   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:52:33.834112   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:52:33.885815   14012 start.go:296] duration metric: took 5.0580376s for postStartSetup
	I0624 05:52:33.885893   14012 fix.go:56] duration metric: took 1m31.0147735s for fixHost
	I0624 05:52:33.885989   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:36.065560   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:36.065806   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:36.065806   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:38.673023   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:38.673023   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:38.679746   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:38.680565   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:38.680565   14012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0624 05:52:38.816943   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719233558.822874030
	
	I0624 05:52:38.816943   14012 fix.go:216] guest clock: 1719233558.822874030
	I0624 05:52:38.816943   14012 fix.go:229] Guest: 2024-06-24 05:52:38.82287403 -0700 PDT Remote: 2024-06-24 05:52:33.8858934 -0700 PDT m=+298.090752301 (delta=4.93698063s)
	I0624 05:52:38.816943   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:41.001196   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:41.001394   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:41.001461   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:43.566492   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:43.566492   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:43.572264   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:43.572935   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:43.573188   14012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719233558
	I0624 05:52:43.719003   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:52:38 UTC 2024
	
	I0624 05:52:43.719003   14012 fix.go:236] clock set: Mon Jun 24 12:52:38 UTC 2024
	 (err=<nil>)
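fix.go compares the guest clock against the host's: the guest reported epoch 1719233558.82 while the host view was 05:52:33.88 local time, a drift of roughly 4.9s, so the guest clock is pinned to the host's epoch second. The two commands involved, exactly as issued above:

    # Read the guest clock with sub-second precision, then set it to a known epoch.
    date +%s.%N
    sudo date -s @1719233558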
	I0624 05:52:43.719003   14012 start.go:83] releasing machines lock for "multinode-876600-m02", held for 1m40.8488477s
	I0624 05:52:43.719003   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:45.915700   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:45.916319   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:45.916319   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:48.474008   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:48.474008   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:48.481343   14012 out.go:177] * Found network options:
	I0624 05:52:48.484144   14012 out.go:177]   - NO_PROXY=172.31.217.139
	W0624 05:52:48.486571   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:52:48.489238   14012 out.go:177]   - NO_PROXY=172.31.217.139
	W0624 05:52:48.491362   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 05:52:48.492747   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:52:48.495118   14012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:52:48.495118   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:48.504964   14012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:52:48.504964   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:50.787667   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:53.541317   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:53.541317   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:53.541570   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:53.571463   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:53.571463   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:53.571888   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:53.727256   14012 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:52:53.727405   14012 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0624 05:52:53.727405   14012 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2224213s)
	I0624 05:52:53.727405   14012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2322673s)
	W0624 05:52:53.727548   14012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:52:53.741131   14012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:52:53.772054   14012 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:52:53.772054   14012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 05:52:53.772167   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:52:53.772231   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:52:53.806892   14012 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:52:53.819759   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:52:53.851658   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:52:53.872776   14012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:52:53.887863   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:52:53.919896   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:52:53.955146   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:52:53.987735   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:52:54.022364   14012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:52:54.058683   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:52:54.094850   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:52:54.127556   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 05:52:54.160153   14012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:52:54.180312   14012 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:52:54.193573   14012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:52:54.228653   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:54.437761   14012 ssh_runner.go:195] Run: sudo systemctl restart containerd
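Before settling on Docker as the runtime, the start code disables the podman bridge CNI config, rewrites /etc/containerd/config.toml for the cgroupfs driver and the pause:3.9 sandbox image, enables IPv4 forwarding, and restarts containerd. A hypothetical follow-up check of the resulting state on the guest (not part of the test) might look like:

    # Confirm the sed edits above left containerd on the cgroupfs driver
    # and that bridged traffic will hit iptables / be forwarded.
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    sudo sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward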
	I0624 05:52:54.471437   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:52:54.485713   14012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:52:54.508214   14012 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:52:54.508214   14012 command_runner.go:130] > [Unit]
	I0624 05:52:54.508214   14012 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:52:54.508214   14012 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:52:54.508214   14012 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:52:54.508214   14012 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:52:54.508214   14012 command_runner.go:130] > StartLimitBurst=3
	I0624 05:52:54.508214   14012 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:52:54.508344   14012 command_runner.go:130] > [Service]
	I0624 05:52:54.508344   14012 command_runner.go:130] > Type=notify
	I0624 05:52:54.508344   14012 command_runner.go:130] > Restart=on-failure
	I0624 05:52:54.508466   14012 command_runner.go:130] > Environment=NO_PROXY=172.31.217.139
	I0624 05:52:54.508466   14012 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:52:54.508466   14012 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:52:54.508466   14012 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:52:54.508466   14012 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:52:54.508466   14012 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:52:54.508466   14012 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:52:54.508466   14012 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:52:54.508602   14012 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:52:54.508602   14012 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecStart=
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:52:54.508602   14012 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:52:54.508602   14012 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitCORE=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:52:54.508762   14012 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:52:54.508762   14012 command_runner.go:130] > TasksMax=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:52:54.508762   14012 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:52:54.508762   14012 command_runner.go:130] > Delegate=yes
	I0624 05:52:54.508762   14012 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:52:54.508909   14012 command_runner.go:130] > KillMode=process
	I0624 05:52:54.508909   14012 command_runner.go:130] > [Install]
	I0624 05:52:54.508909   14012 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:52:54.523324   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:52:54.561667   14012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:52:54.605022   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:52:54.640792   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:52:54.684770   14012 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:52:54.760274   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:52:54.786083   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:52:54.822073   14012 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:52:54.836364   14012 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:52:54.844010   14012 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:52:54.859952   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:52:54.880057   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:52:54.927694   14012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:52:55.159975   14012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:52:55.363797   14012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:52:55.363893   14012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:52:55.409772   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:55.606884   14012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:52:58.236153   14012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6291871s)
	I0624 05:52:58.249350   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:52:58.287725   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:52:58.322224   14012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:52:58.531177   14012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:52:58.733741   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:58.938496   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:52:58.984056   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:52:59.020806   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:59.229825   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:52:59.351125   14012 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:52:59.364217   14012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:52:59.373216   14012 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:52:59.373216   14012 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:52:59.373216   14012 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0624 05:52:59.373216   14012 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:52:59.373216   14012 command_runner.go:130] > Access: 2024-06-24 12:52:59.260247289 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] > Modify: 2024-06-24 12:52:59.260247289 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] > Change: 2024-06-24 12:52:59.264247281 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] >  Birth: -
	I0624 05:52:59.373216   14012 start.go:562] Will wait 60s for crictl version
	I0624 05:52:59.384201   14012 ssh_runner.go:195] Run: which crictl
	I0624 05:52:59.390214   14012 command_runner.go:130] > /usr/bin/crictl
	I0624 05:52:59.405008   14012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:52:59.472321   14012 command_runner.go:130] > Version:  0.1.0
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeName:  docker
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:52:59.472321   14012 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 05:52:59.481410   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:52:59.517651   14012 command_runner.go:130] > 26.1.4
	I0624 05:52:59.528512   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:52:59.564486   14012 command_runner.go:130] > 26.1.4
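With /etc/crictl.yaml pointing at the cri-dockerd socket and the cri-docker units restarted, crictl reports RuntimeName docker / RuntimeVersion 26.1.4, matching the docker version probes. The endpoint can also be passed explicitly instead of relying on crictl.yaml; a small example of the same two checks:

    # Query the CRI runtime through the cri-dockerd socket enabled above.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker version --format '{{.Server.Version}}'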
	I0624 05:52:59.568522   14012 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:52:59.571513   14012 out.go:177]   - env NO_PROXY=172.31.217.139
	I0624 05:52:59.574530   14012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:52:59.581476   14012 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:52:59.581476   14012 ip.go:210] interface addr: 172.31.208.1/20
	I0624 05:52:59.593469   14012 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:52:59.599775   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:52:59.622147   14012 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:52:59.622996   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:52:59.623685   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:01.830731   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:01.830731   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:01.830823   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:01.831648   14012 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.216.161
	I0624 05:53:01.831709   14012 certs.go:194] generating shared ca certs ...
	I0624 05:53:01.831709   14012 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:53:01.832301   14012 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:53:01.832727   14012 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:53:01.832894   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:53:01.833715   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:53:01.833715   14012 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:53:01.833715   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:53:01.835389   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:53:01.835389   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:53:01.835389   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:53:01.835969   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:01.836185   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:53:01.889410   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:53:01.946338   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:53:01.995283   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:53:02.046383   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:53:02.094942   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:53:02.141874   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:53:02.203435   14012 ssh_runner.go:195] Run: openssl version
	I0624 05:53:02.212760   14012 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:53:02.225838   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:53:02.262099   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:53:02.269865   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:53:02.269865   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:53:02.284609   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:53:02.294976   14012 command_runner.go:130] > 51391683
	I0624 05:53:02.308604   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 05:53:02.346804   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:53:02.381782   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.389490   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.390324   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.406339   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.415412   14012 command_runner.go:130] > 3ec20f2e
	I0624 05:53:02.430876   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 05:53:02.470755   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:53:02.509023   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.517104   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.517507   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.532647   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.541811   14012 command_runner.go:130] > b5213941
	I0624 05:53:02.554737   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
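Each CA file is made trusted on the guest the same way: copy it under /usr/share/ca-certificates, compute its subject hash with openssl, and symlink it into /etc/ssl/certs as <hash>.0, which is what produces the 51391683.0, 3ec20f2e.0 and b5213941.0 links above. For one file the pattern is:

    # Hash the certificate subject and register it in the OpenSSL cert directory.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"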
	I0624 05:53:02.589160   14012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:53:02.595002   14012 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:53:02.596033   14012 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:53:02.596033   14012 kubeadm.go:928] updating node {m02 172.31.216.161 8443 v1.30.2 docker false true} ...
	I0624 05:53:02.596033   14012 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 05:53:02.610938   14012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:53:02.631186   14012 command_runner.go:130] > kubeadm
	I0624 05:53:02.631257   14012 command_runner.go:130] > kubectl
	I0624 05:53:02.631257   14012 command_runner.go:130] > kubelet
	I0624 05:53:02.631300   14012 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 05:53:02.643970   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0624 05:53:02.664014   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0624 05:53:02.698068   14012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:53:02.743429   14012 ssh_runner.go:195] Run: grep 172.31.217.139	control-plane.minikube.internal$ /etc/hosts
	I0624 05:53:02.750413   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.217.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:53:02.790956   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:53:03.012241   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
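kubeadm.go renders a kubelet drop-in that clears ExecStart and relaunches the cached v1.30.2 kubelet with --hostname-override and --node-ip for this worker, scps it to the kubelet.service.d directory, and starts the service. A hypothetical way to confirm the unit came up on the guest (not run by the test) would be:

    # Check that kubelet is active and look at its most recent log lines.
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20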
	I0624 05:53:03.040551   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:03.040624   14012 start.go:316] joinCluster: &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:53:03.040624   14012 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.31.216.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0624 05:53:03.040624   14012 host.go:66] Checking if "multinode-876600-m02" exists ...
	I0624 05:53:03.042127   14012 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:53:03.042821   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:53:03.043517   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:05.260684   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:05.260743   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:05.260743   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:05.261138   14012 api_server.go:166] Checking apiserver status ...
	I0624 05:53:05.274606   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:53:05.274606   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:07.470842   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:07.470842   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:07.471036   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:53:10.133303   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:53:10.133303   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:10.133897   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:53:10.254454   14012 command_runner.go:130] > 1846
	I0624 05:53:10.254544   14012 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9799197s)
	I0624 05:53:10.268784   14012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup
	W0624 05:53:10.286805   14012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0624 05:53:10.301053   14012 ssh_runner.go:195] Run: ls
	I0624 05:53:10.308709   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:53:10.316337   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
	I0624 05:53:10.329149   14012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-876600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0624 05:53:10.492619   14012 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t9wzm, kube-system/kube-proxy-hjjs8
	I0624 05:53:13.514425   14012 command_runner.go:130] > node/multinode-876600-m02 cordoned
	I0624 05:53:13.514564   14012 command_runner.go:130] > pod "busybox-fc5497c4f-vqhsz" has DeletionTimestamp older than 1 seconds, skipping
	I0624 05:53:13.514564   14012 command_runner.go:130] > node/multinode-876600-m02 drained
	I0624 05:53:13.514564   14012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-876600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.185403s)
	I0624 05:53:13.514564   14012 node.go:128] successfully drained node "multinode-876600-m02"
	I0624 05:53:13.514812   14012 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0624 05:53:13.514950   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:53:15.702742   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:15.702844   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:15.702844   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:53:18.348619   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:53:18.348825   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:18.349041   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:53:18.878723   14012 command_runner.go:130] ! W0624 12:53:18.884032    1549 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory

                                                
                                                
** /stderr **
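
The stderr tail above shows the rejoin path for the stale worker: minikube first drains "multinode-876600-m02" from the control-plane node with the bundled kubectl, then runs "kubeadm reset --force" on the worker itself over SSH before attempting to rejoin it. Below is a minimal Go sketch of that two-step sequence, under stated assumptions: runOverSSH is a hypothetical stand-in for minikube's ssh_runner (this is not minikube's actual code), and the paths and flags are the ones visible in the log.

// Hedged sketch of the worker-removal sequence seen in the stderr above:
// drain the stale node via the control-plane host, then wipe kubeadm state
// on the worker so it can rejoin cleanly. runOverSSH is a hypothetical
// helper; a real caller would execute these over SSH against the node IPs
// shown in the log.
package main

import "fmt"

func removeWorker(runOverSSH func(host, cmd string) error, cpHost, workerHost, nodeName string) error {
	// 1) Cordon and drain the node from the control-plane host.
	drain := fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
		"/var/lib/minikube/binaries/v1.30.2/kubectl drain %s "+
		"--force --grace-period=1 --skip-wait-for-delete-timeout=1 "+
		"--disable-eviction --ignore-daemonsets --delete-emptydir-data", nodeName)
	if err := runOverSSH(cpHost, drain); err != nil {
		return fmt.Errorf("drain %s: %w", nodeName, err)
	}
	// 2) Reset kubeadm state on the worker itself.
	reset := `/bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo ` +
		`kubeadm reset --force --ignore-preflight-errors=all ` +
		`--cri-socket=unix:///var/run/cri-dockerd.sock"`
	if err := runOverSSH(workerHost, reset); err != nil {
		return fmt.Errorf("kubeadm reset on %s: %w", nodeName, err)
	}
	return nil
}

func main() {
	// Stub runner that only prints the commands instead of executing them.
	printRunner := func(host, cmd string) error {
		fmt.Printf("[%s] %s\n", host, cmd)
		return nil
	}
	_ = removeWorker(printRunner, "172.31.217.139", "172.31.216.161", "multinode-876600-m02")
}
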
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-876600" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-876600
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-876600: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-876600" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-876600	172.31.211.219
multinode-876600-m02	172.31.221.199
multinode-876600-m03	172.31.210.168

                                                
                                                
After restart: 
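
The empty node list after restart follows from the "node list" invocation above failing in 0s with "context deadline exceeded": the harness's overall deadline appears to have been consumed by the preceding restart, so the command returns immediately with no output. A minimal sketch of that failure mode using only the Go standard library follows; the sleep stands in for the long restart, and the binary path is taken from the log for illustration only.

// Hedged sketch: once an earlier step has consumed the whole context budget,
// the next CommandContext call fails at once with context.DeadlineExceeded
// and produces no output, which is consistent with the empty "After restart"
// node list above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	time.Sleep(150 * time.Millisecond) // earlier step eats the whole budget

	cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
		"node", "list", "-p", "multinode-876600")
	out, err := cmd.CombinedOutput()
	// With the deadline already passed, the command is never allowed to run
	// (assuming the binary path even resolves); either way the caller gets no
	// node list back.
	fmt.Printf("output=%q err=%v ctxErr=%v\n", out, err, ctx.Err())
}
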
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-876600 -n multinode-876600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-876600 -n multinode-876600: (12.2883712s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 logs -n 25: (11.5933725s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-876600 cp testdata\cp-test.txt                                                                                 | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:37 PDT | 24 Jun 24 05:38 PDT |
	|         | multinode-876600-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:38 PDT |
	|         | multinode-876600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:38 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:38 PDT |
	|         | multinode-876600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:38 PDT |
	|         | multinode-876600:/home/docker/cp-test_multinode-876600-m02_multinode-876600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:38 PDT |
	|         | multinode-876600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n multinode-876600 sudo cat                                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:38 PDT | 24 Jun 24 05:39 PDT |
	|         | /home/docker/cp-test_multinode-876600-m02_multinode-876600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:39 PDT | 24 Jun 24 05:39 PDT |
	|         | multinode-876600-m03:/home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:39 PDT | 24 Jun 24 05:39 PDT |
	|         | multinode-876600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n multinode-876600-m03 sudo cat                                                                    | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:39 PDT | 24 Jun 24 05:39 PDT |
	|         | /home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp testdata\cp-test.txt                                                                                 | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:39 PDT | 24 Jun 24 05:39 PDT |
	|         | multinode-876600-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:39 PDT | 24 Jun 24 05:40 PDT |
	|         | multinode-876600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:40 PDT |
	|         | multinode-876600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:40 PDT |
	|         | multinode-876600:/home/docker/cp-test_multinode-876600-m03_multinode-876600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:40 PDT |
	|         | multinode-876600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n multinode-876600 sudo cat                                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:40 PDT |
	|         | /home/docker/cp-test_multinode-876600-m03_multinode-876600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt                                                        | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:40 PDT | 24 Jun 24 05:41 PDT |
	|         | multinode-876600-m02:/home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n                                                                                                  | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:41 PDT | 24 Jun 24 05:41 PDT |
	|         | multinode-876600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-876600 ssh -n multinode-876600-m02 sudo cat                                                                    | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:41 PDT | 24 Jun 24 05:41 PDT |
	|         | /home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-876600 node stop m03                                                                                           | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:41 PDT | 24 Jun 24 05:41 PDT |
	| node    | multinode-876600 node start                                                                                              | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:42 PDT | 24 Jun 24 05:45 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-876600                                                                                                 | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:45 PDT |                     |
	| stop    | -p multinode-876600                                                                                                      | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:45 PDT | 24 Jun 24 05:47 PDT |
	| start   | -p multinode-876600                                                                                                      | multinode-876600 | minikube1\jenkins | v1.33.1 | 24 Jun 24 05:47 PDT |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 05:47:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 05:47:35.880785   14012 out.go:291] Setting OutFile to fd 912 ...
	I0624 05:47:35.881481   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:47:35.881481   14012 out.go:304] Setting ErrFile to fd 500...
	I0624 05:47:35.881481   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:47:35.902984   14012 out.go:298] Setting JSON to false
	I0624 05:47:35.908378   14012 start.go:129] hostinfo: {"hostname":"minikube1","uptime":23711,"bootTime":1719209544,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 05:47:35.908378   14012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 05:47:35.977561   14012 out.go:177] * [multinode-876600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 05:47:36.093958   14012 notify.go:220] Checking for updates...
	I0624 05:47:36.136283   14012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:47:36.221393   14012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0624 05:47:36.229694   14012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 05:47:36.266219   14012 out.go:177]   - MINIKUBE_LOCATION=19124
	I0624 05:47:36.282155   14012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0624 05:47:36.287557   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:47:36.289122   14012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 05:47:41.683761   14012 out.go:177] * Using the hyperv driver based on existing profile
	I0624 05:47:41.716198   14012 start.go:297] selected driver: hyperv
	I0624 05:47:41.716198   14012 start.go:901] validating driver "hyperv" against &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:47:41.724181   14012 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0624 05:47:41.777496   14012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:47:41.777496   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:47:41.777496   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:47:41.777786   14012 start.go:340] cluster config:
	{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.211.219 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:47:41.778085   14012 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 05:47:41.822930   14012 out.go:177] * Starting "multinode-876600" primary control-plane node in "multinode-876600" cluster
	I0624 05:47:41.834828   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:47:41.835034   14012 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 05:47:41.835150   14012 cache.go:56] Caching tarball of preloaded images
	I0624 05:47:41.835578   14012 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:47:41.835832   14012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:47:41.836267   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:47:41.839458   14012 start.go:360] acquireMachinesLock for multinode-876600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:47:41.840016   14012 start.go:364] duration metric: took 558.9µs to acquireMachinesLock for "multinode-876600"
	I0624 05:47:41.840386   14012 start.go:96] Skipping create...Using existing machine configuration
	I0624 05:47:41.840441   14012 fix.go:54] fixHost starting: 
	I0624 05:47:41.841211   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:44.499606   14012 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 05:47:44.510968   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:44.510968   14012 fix.go:112] recreateIfNeeded on multinode-876600: state=Stopped err=<nil>
	W0624 05:47:44.510968   14012 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 05:47:44.521768   14012 out.go:177] * Restarting existing hyperv VM for "multinode-876600" ...
	I0624 05:47:44.560050   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600
	I0624 05:47:47.561367   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:47.561464   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:47.561464   14012 main.go:141] libmachine: Waiting for host to start...
	I0624 05:47:47.561543   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:49.748922   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:47:49.761167   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:49.761250   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:47:52.160828   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:52.160828   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:53.172014   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:47:55.333472   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:47:55.344925   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:55.344925   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:47:57.777506   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:47:57.777506   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:47:58.778248   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:00.944614   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:00.953095   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:00.953254   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:03.399362   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:48:03.399362   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:04.403630   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:06.562200   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:06.562635   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:06.562741   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:09.027310   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:48:09.027310   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:10.041460   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:12.234249   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:12.234249   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:12.241903   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:14.762054   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:14.762054   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:14.773865   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:16.820547   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:19.311624   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:19.311821   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:19.312076   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:48:19.315026   14012 machine.go:94] provisionDockerMachine start ...
	I0624 05:48:19.315109   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:21.367280   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:21.367280   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:21.377733   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:23.795383   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:23.795383   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:23.812454   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:23.813213   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:23.813213   14012 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:48:23.941448   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 05:48:23.941555   14012 buildroot.go:166] provisioning hostname "multinode-876600"
	I0624 05:48:23.941637   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:26.031170   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:26.031170   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:26.043047   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:28.498014   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:28.498014   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:28.514891   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:28.515507   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:28.515507   14012 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600 && echo "multinode-876600" | sudo tee /etc/hostname
	I0624 05:48:28.665093   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600
	
	I0624 05:48:28.665218   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:30.705686   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:30.717217   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:30.717403   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:33.205040   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:33.205040   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:33.222256   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:33.222256   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:33.222903   14012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:48:33.360338   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 05:48:33.360455   14012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:48:33.360605   14012 buildroot.go:174] setting up certificates
	I0624 05:48:33.360605   14012 provision.go:84] configureAuth start
	I0624 05:48:33.360653   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:35.484443   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:35.484443   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:35.484651   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:37.913308   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:37.913308   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:37.924422   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:39.990065   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:39.990065   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:40.000535   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:42.412433   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:42.412433   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:42.423902   14012 provision.go:143] copyHostCerts
	I0624 05:48:42.424151   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:48:42.424478   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:48:42.424478   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:48:42.424728   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:48:42.426192   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:48:42.426547   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:48:42.426547   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:48:42.426547   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:48:42.428071   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:48:42.428368   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:48:42.428368   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:48:42.428767   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:48:42.429871   14012 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600 san=[127.0.0.1 172.31.217.139 localhost minikube multinode-876600]
	I0624 05:48:42.579627   14012 provision.go:177] copyRemoteCerts
	I0624 05:48:42.590215   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:48:42.590215   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:44.603809   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:44.603809   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:44.614278   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:47.051839   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:47.051839   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:47.063884   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:48:47.169335   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5791028s)
	I0624 05:48:47.169424   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:48:47.169954   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:48:47.215116   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:48:47.215637   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0624 05:48:47.260635   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:48:47.261202   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0624 05:48:47.306709   14012 provision.go:87] duration metric: took 13.9459462s to configureAuth
	I0624 05:48:47.306769   14012 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:48:47.307934   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:48:47.308157   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:49.355089   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:49.366289   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:49.366415   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:51.851609   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:51.851609   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:51.857185   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:51.857941   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:51.857941   14012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:48:51.983187   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:48:51.983187   14012 buildroot.go:70] root file system type: tmpfs
	I0624 05:48:51.983661   14012 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:48:51.983803   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:54.063579   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:54.063579   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:54.074296   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:48:56.495607   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:48:56.495607   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:56.517067   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:48:56.517299   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:48:56.517299   14012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:48:56.679303   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:48:56.679440   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:48:58.750573   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:48:58.750573   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:48:58.762462   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:01.332721   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:01.343878   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:01.351330   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:01.351330   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:01.351330   14012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:49:03.817641   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:49:03.817745   14012 machine.go:97] duration metric: took 44.5025033s to provisionDockerMachine
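The docker.service update just above follows an idempotent pattern: the freshly rendered unit is written to docker.service.new, diffed against the installed unit, and only swapped in (followed by daemon-reload, enable and restart) when the two differ. Below is a minimal Go sketch of building that shell command; the helper name is an assumption, the command text is copied from the log.

package main

import "fmt"

// updateUnitCmd renders the "replace only if changed" command seen in the log:
// diff the current unit against the .new file and, when they differ, move the
// new file into place and restart the service.
func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit + ".service"
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker"))
}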
	I0624 05:49:03.817791   14012 start.go:293] postStartSetup for "multinode-876600" (driver="hyperv")
	I0624 05:49:03.817791   14012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:49:03.828976   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:49:03.828976   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:05.917203   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:05.928220   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:05.928404   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:08.384574   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:08.384574   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:08.385107   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:08.487134   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6581409s)
	I0624 05:49:08.505521   14012 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:49:08.517083   14012 command_runner.go:130] > NAME=Buildroot
	I0624 05:49:08.517188   14012 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:49:08.517188   14012 command_runner.go:130] > ID=buildroot
	I0624 05:49:08.517188   14012 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:49:08.517188   14012 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:49:08.517188   14012 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:49:08.517319   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:49:08.517791   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:49:08.519070   14012 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:49:08.519070   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:49:08.530635   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:49:08.550071   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:49:08.595311   14012 start.go:296] duration metric: took 4.7775028s for postStartSetup
	I0624 05:49:08.595509   14012 fix.go:56] duration metric: took 1m26.7547463s for fixHost
	I0624 05:49:08.595663   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:10.624723   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:10.624723   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:10.624866   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:13.078139   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:13.091367   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:13.097690   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:13.098290   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:13.098290   14012 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 05:49:13.219657   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719233353.215073916
	
	I0624 05:49:13.219657   14012 fix.go:216] guest clock: 1719233353.215073916
	I0624 05:49:13.219754   14012 fix.go:229] Guest: 2024-06-24 05:49:13.215073916 -0700 PDT Remote: 2024-06-24 05:49:08.5955439 -0700 PDT m=+92.801165501 (delta=4.619530016s)
	I0624 05:49:13.219836   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:15.286491   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:15.286491   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:15.286740   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:17.715232   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:17.719070   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:17.725756   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:49:17.726686   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.217.139 22 <nil> <nil>}
	I0624 05:49:17.726686   14012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719233353
	I0624 05:49:17.859280   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:49:13 UTC 2024
	
	I0624 05:49:17.859350   14012 fix.go:236] clock set: Mon Jun 24 12:49:13 UTC 2024
	 (err=<nil>)
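The clock fix above takes two SSH round-trips: the guest reports its time with date +%s.%N (the %!s(MISSING) in the log is a Go formatting artifact in the logger, not the command actually run), the driver compares it with a host-side reference to compute the delta, and a whole-second timestamp is written back with sudo date -s @<seconds>. A rough Go sketch under those assumptions (not minikube's actual fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDrift parses the guest's `date +%s.%N` output, computes the drift
// against a host-side reference time, and renders the correction command.
// Which timestamp is written back is an assumption; the log above uses
// @1719233353.
func guestDrift(dateOutput string, hostRef time.Time) (time.Duration, string, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, "", err
	}
	guest := time.Unix(int64(secs), 0)
	return guest.Sub(hostRef), fmt.Sprintf("sudo date -s @%d", guest.Unix()), nil
}

func main() {
	// Values taken from the log lines above.
	drift, cmd, _ := guestDrift("1719233353.215073916", time.Unix(1719233348, 595543900))
	fmt.Println(drift, cmd)
}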
	I0624 05:49:17.859385   14012 start.go:83] releasing machines lock for "multinode-876600", held for 1m36.0190136s
	I0624 05:49:17.859559   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:19.941531   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:19.953082   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:19.953152   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:22.374617   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:22.374617   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:22.391320   14012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:49:22.391448   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:22.403553   14012 ssh_runner.go:195] Run: cat /version.json
	I0624 05:49:22.403553   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:49:24.533805   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:24.533924   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:24.534028   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:24.544576   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:49:27.126497   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:27.126497   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:27.138292   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:27.157547   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:49:27.157547   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:49:27.162012   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:49:27.235769   14012 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0624 05:49:27.235929   14012 ssh_runner.go:235] Completed: cat /version.json: (4.8321981s)
	I0624 05:49:27.248698   14012 ssh_runner.go:195] Run: systemctl --version
	I0624 05:49:27.307856   14012 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:49:27.307946   14012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9166072s)
	I0624 05:49:27.308036   14012 command_runner.go:130] > systemd 252 (252)
	I0624 05:49:27.308077   14012 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0624 05:49:27.319284   14012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:49:27.322999   14012 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0624 05:49:27.328935   14012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:49:27.339751   14012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:49:27.365596   14012 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:49:27.367582   14012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
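Conflicting CNI configurations are disabled rather than deleted: the find command above renames any bridge or podman config under /etc/cni/net.d by appending a .mk_disabled suffix, skipping files already disabled. A local-filesystem sketch of the same idea (an assumption; the log does it remotely via find -exec mv):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files in dir by adding a
// ".mk_disabled" suffix, mirroring the remote find/mv command in the log.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	fmt.Println(disableCNIConfigs("/etc/cni/net.d"))
}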
	I0624 05:49:27.367582   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:49:27.367840   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:49:27.398576   14012 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:49:27.413526   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:49:27.448573   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:49:27.469595   14012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:49:27.483173   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:49:27.516238   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:49:27.544259   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:49:27.573981   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:49:27.606795   14012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:49:27.637009   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:49:27.667351   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:49:27.698788   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
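The sed commands above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), uses the runc v2 runtime, points at /etc/cni/net.d, and allows unprivileged ports. A regexp equivalent of the SystemdCgroup edit, sketched locally in Go rather than via sed over SSH (an assumed helper shape, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// systemdCgroupRe matches the SystemdCgroup line regardless of indentation,
// like the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'` call in the log.
var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

// useCgroupfs rewrites a config.toml snippet to disable the systemd cgroup
// driver, i.e. fall back to cgroupfs.
func useCgroupfs(configTOML string) string {
	return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Print(useCgroupfs("            SystemdCgroup = true\n"))
}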
	I0624 05:49:27.730030   14012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:49:27.746470   14012 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:49:27.759990   14012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:49:27.787789   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:27.978133   14012 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0624 05:49:28.006804   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:49:28.022893   14012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:49:28.044875   14012 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:49:28.044875   14012 command_runner.go:130] > [Unit]
	I0624 05:49:28.044875   14012 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:49:28.044875   14012 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:49:28.044875   14012 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:49:28.044875   14012 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:49:28.044875   14012 command_runner.go:130] > StartLimitBurst=3
	I0624 05:49:28.044875   14012 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:49:28.044875   14012 command_runner.go:130] > [Service]
	I0624 05:49:28.045042   14012 command_runner.go:130] > Type=notify
	I0624 05:49:28.045042   14012 command_runner.go:130] > Restart=on-failure
	I0624 05:49:28.045042   14012 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:49:28.045042   14012 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:49:28.045042   14012 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:49:28.045042   14012 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:49:28.045042   14012 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:49:28.045042   14012 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:49:28.045042   14012 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:49:28.045178   14012 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:49:28.045178   14012 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecStart=
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:49:28.045178   14012 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:49:28.045178   14012 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:49:28.045178   14012 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:49:28.045178   14012 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:49:28.045313   14012 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:49:28.045313   14012 command_runner.go:130] > LimitCORE=infinity
	I0624 05:49:28.045397   14012 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:49:28.045397   14012 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:49:28.045397   14012 command_runner.go:130] > TasksMax=infinity
	I0624 05:49:28.045397   14012 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:49:28.045397   14012 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:49:28.045397   14012 command_runner.go:130] > Delegate=yes
	I0624 05:49:28.045397   14012 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:49:28.045397   14012 command_runner.go:130] > KillMode=process
	I0624 05:49:28.045499   14012 command_runner.go:130] > [Install]
	I0624 05:49:28.045499   14012 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:49:28.059973   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:49:28.091667   14012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:49:28.138019   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:49:28.175833   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:49:28.209589   14012 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:49:28.266376   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:49:28.289907   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:49:28.317969   14012 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:49:28.333318   14012 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:49:28.339785   14012 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:49:28.350418   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:49:28.370602   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:49:28.410312   14012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:49:28.602162   14012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:49:28.773723   14012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:49:28.774011   14012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:49:28.820013   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:28.989642   14012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:49:31.630268   14012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.640522s)
	I0624 05:49:31.644407   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:49:31.682245   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:49:31.717283   14012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:49:31.892114   14012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:49:32.072298   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:32.250037   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:49:32.291868   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:49:32.328978   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:32.504679   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:49:32.605839   14012 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:49:32.619028   14012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:49:32.628363   14012 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:49:32.628498   14012 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:49:32.628498   14012 command_runner.go:130] > Device: 0,22	Inode: 865         Links: 1
	I0624 05:49:32.628498   14012 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:49:32.628498   14012 command_runner.go:130] > Access: 2024-06-24 12:49:32.514780067 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] > Modify: 2024-06-24 12:49:32.514780067 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] > Change: 2024-06-24 12:49:32.518779983 +0000
	I0624 05:49:32.628498   14012 command_runner.go:130] >  Birth: -
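start.go announces a wait of up to 60 seconds for /var/run/cri-dockerd.sock before probing crictl; the stat output above shows the socket already present. A minimal polling sketch of such a wait (an assumption, not the actual start.go implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls the given path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}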
	I0624 05:49:32.628621   14012 start.go:562] Will wait 60s for crictl version
	I0624 05:49:32.641328   14012 ssh_runner.go:195] Run: which crictl
	I0624 05:49:32.646872   14012 command_runner.go:130] > /usr/bin/crictl
	I0624 05:49:32.659436   14012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:49:32.719145   14012 command_runner.go:130] > Version:  0.1.0
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeName:  docker
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:49:32.719145   14012 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:49:32.719145   14012 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 05:49:32.728177   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:49:32.761002   14012 command_runner.go:130] > 26.1.4
	I0624 05:49:32.772410   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:49:32.801743   14012 command_runner.go:130] > 26.1.4
	I0624 05:49:32.805936   14012 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:49:32.805936   14012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:49:32.810365   14012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:49:32.813498   14012 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:49:32.813498   14012 ip.go:210] interface addr: 172.31.208.1/20
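The ip.go lookup above scans the host's network interfaces for one whose name matches the "vEthernet (Default Switch)" prefix and picks its IPv4 address, 172.31.208.1 here. A sketch of that lookup with the standard library (assumed shape, not the actual ip.go code):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix returns the first IPv4 address of the first interface
// whose name starts with prefix.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	fmt.Println(ipForInterfacePrefix("vEthernet (Default Switch)"))
}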
	I0624 05:49:32.824921   14012 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:49:32.830964   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
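The /etc/hosts update above is grep-then-rewrite: if the host.minikube.internal entry is missing, any stale line for that name is filtered out and a fresh "IP<tab>name" line is appended via a temp file that is copied over /etc/hosts. A local string-level sketch of the rewrite (an assumption; the log performs it remotely in bash):

package main

import (
	"fmt"
	"strings"
)

// updateHosts drops any existing line ending in "\t<name>" and appends the
// current mapping, mirroring the bash one-liner in the log.
func updateHosts(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(updateHosts("127.0.0.1\tlocalhost\n", "172.31.208.1", "host.minikube.internal"))
}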
	I0624 05:49:32.850186   14012 kubeadm.go:877] updating cluster {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0624 05:49:32.850826   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:49:32.859963   14012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:49:32.884850   14012 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0624 05:49:32.884850   14012 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0624 05:49:32.884938   14012 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0624 05:49:32.884938   14012 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0624 05:49:32.884938   14012 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:49:32.885016   14012 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0624 05:49:32.885109   14012 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0624 05:49:32.885136   14012 docker.go:615] Images already preloaded, skipping extraction
	I0624 05:49:32.895578   14012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0624 05:49:32.926566   14012 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0624 05:49:32.926654   14012 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0624 05:49:32.926654   14012 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0624 05:49:32.926654   14012 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0624 05:49:32.926654   14012 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0624 05:49:32.926782   14012 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0624 05:49:32.926782   14012 cache_images.go:84] Images are preloaded, skipping loading
	I0624 05:49:32.926910   14012 kubeadm.go:928] updating node { 172.31.217.139 8443 v1.30.2 docker true true} ...
	I0624 05:49:32.927169   14012 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.217.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 05:49:32.936864   14012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0624 05:49:32.984193   14012 command_runner.go:130] > cgroupfs
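The cgroup driver is detected by asking the Docker daemon directly with a Go template, as in the `docker info --format {{.CgroupDriver}}` run above, which returns cgroupfs here. A tiny sketch of that probe (the helper name is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver reports which cgroup driver the local Docker daemon uses.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(dockerCgroupDriver())
}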
	I0624 05:49:32.984478   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:49:32.984633   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:49:32.984670   14012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0624 05:49:32.984743   14012 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.31.217.139 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-876600 NodeName:multinode-876600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.31.217.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.31.217.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0624 05:49:32.984843   14012 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.31.217.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-876600"
	  kubeletExtraArgs:
	    node-ip: 172.31.217.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0624 05:49:32.998136   14012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubeadm
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubectl
	I0624 05:49:33.017408   14012 command_runner.go:130] > kubelet
	I0624 05:49:33.017527   14012 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 05:49:33.030864   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0624 05:49:33.040697   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0624 05:49:33.079203   14012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:49:33.107622   14012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0624 05:49:33.161512   14012 ssh_runner.go:195] Run: grep 172.31.217.139	control-plane.minikube.internal$ /etc/hosts
	I0624 05:49:33.168275   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.217.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:49:33.205294   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:33.390514   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:49:33.413655   14012 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.217.139
	I0624 05:49:33.413655   14012 certs.go:194] generating shared ca certs ...
	I0624 05:49:33.413655   14012 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.420300   14012 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:49:33.420962   14012 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:49:33.421162   14012 certs.go:256] generating profile certs ...
	I0624 05:49:33.422002   14012 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\client.key
	I0624 05:49:33.422002   14012 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81
	I0624 05:49:33.422002   14012 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.31.217.139]
	I0624 05:49:33.687208   14012 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 ...
	I0624 05:49:33.687208   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81: {Name:mke29aa285d1480a4c0ffe6b00fae4b653965b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.696105   14012 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81 ...
	I0624 05:49:33.696105   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81: {Name:mk9bb0a6fbcaf4c73bc8f11ba3bdac939b7058e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:33.697907   14012 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt.b2bcbf81 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt
	I0624 05:49:33.714027   14012 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key.b2bcbf81 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key
	I0624 05:49:33.715812   14012 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key
	I0624 05:49:33.715847   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:49:33.716067   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:49:33.716196   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:49:33.716196   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:49:33.716599   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0624 05:49:33.716959   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0624 05:49:33.717117   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0624 05:49:33.717117   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0624 05:49:33.717702   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:49:33.718538   14012 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:49:33.718538   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:49:33.718538   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:49:33.719380   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:49:33.719720   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:49:33.720266   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:49:33.720530   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:49:33.720714   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:49:33.720901   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:33.722347   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:49:33.769893   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:49:33.822980   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:49:33.871005   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:49:33.919551   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0624 05:49:33.968621   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0624 05:49:34.017050   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0624 05:49:34.061235   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0624 05:49:34.108648   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:49:34.154736   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:49:34.198680   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:49:34.248395   14012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0624 05:49:34.296627   14012 ssh_runner.go:195] Run: openssl version
	I0624 05:49:34.305304   14012 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:49:34.318218   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:49:34.354304   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:49:34.362303   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:49:34.362303   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:49:34.374333   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:49:34.383618   14012 command_runner.go:130] > 51391683
	I0624 05:49:34.398596   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 05:49:34.430042   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:49:34.459015   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.466217   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.466217   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.479295   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:49:34.486871   14012 command_runner.go:130] > 3ec20f2e
	I0624 05:49:34.503042   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 05:49:34.538978   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:49:34.571703   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.578295   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.578361   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.591714   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:49:34.594884   14012 command_runner.go:130] > b5213941
	I0624 05:49:34.613366   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
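Each CA file copied into /usr/share/ca-certificates above is then linked into /etc/ssl/certs under its OpenSSL subject hash (944.pem -> 51391683.0, 9442.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0), which is how OpenSSL looks up trusted certificates. A sketch of producing those commands (the helper shape is assumed; the individual commands are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACommands hashes a PEM certificate with openssl and returns the
// shell commands that would link it into /etc/ssl/certs/<hash>.0.
func installCACommands(pemPath string) ([]string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return nil, err
	}
	hash := strings.TrimSpace(string(out))
	return []string{
		fmt.Sprintf("sudo test -s %s", pemPath),
		fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s.0", pemPath, hash),
	}, nil
}

func main() {
	fmt.Println(installCACommands("/usr/share/ca-certificates/944.pem"))
}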
	I0624 05:49:34.645755   14012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:49:34.654748   14012 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:49:34.654748   14012 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0624 05:49:34.654748   14012 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0624 05:49:34.654748   14012 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0624 05:49:34.654748   14012 command_runner.go:130] > Access: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] > Modify: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] > Change: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.654748   14012 command_runner.go:130] >  Birth: 2024-06-24 12:26:15.616289600 +0000
	I0624 05:49:34.668786   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0624 05:49:34.678596   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.691419   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0624 05:49:34.701314   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.714808   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0624 05:49:34.723729   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.737365   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0624 05:49:34.746988   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.759902   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0624 05:49:34.765490   14012 command_runner.go:130] > Certificate will not expire
	I0624 05:49:34.787137   14012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0624 05:49:34.789296   14012 command_runner.go:130] > Certificate will not expire
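Each control-plane certificate above is checked with openssl x509 -checkend 86400, whose exit status indicates whether the certificate expires within the next 24 hours; here all of them report "Certificate will not expire". A small Go wrapper around that probe (the function name is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay returns true when the certificate at certPath expires in
// the next 86400 seconds, mirroring the -checkend probes in the log.
func expiresWithinADay(certPath string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	if err == nil {
		return false, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: certificate will expire within a day
	}
	return false, err
}

func main() {
	fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}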
	I0624 05:49:34.796838   14012 kubeadm.go:391] StartCluster: {Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.221.199 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:49:34.805876   14012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 05:49:34.856777   14012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0624 05:49:34.876776   14012 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0624 05:49:34.876845   14012 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0624 05:49:34.876910   14012 command_runner.go:130] > /var/lib/minikube/etcd:
	I0624 05:49:34.876910   14012 command_runner.go:130] > member
	W0624 05:49:34.876975   14012 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0624 05:49:34.877006   14012 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0624 05:49:34.877129   14012 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0624 05:49:34.890260   14012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0624 05:49:34.909364   14012 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0624 05:49:34.910063   14012 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-876600" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:34.910972   14012 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-876600" cluster setting kubeconfig missing "multinode-876600" context setting]
	I0624 05:49:34.912417   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:34.926855   14012 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:34.927834   14012 kapi.go:59] client config for multinode-876600: &rest.Config{Host:"https://172.31.217.139:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-876600/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x283cde0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0624 05:49:34.929776   14012 cert_rotation.go:137] Starting client certificate rotation controller
	I0624 05:49:34.941729   14012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0624 05:49:34.959286   14012 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0624 05:49:34.959615   14012 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0624 05:49:34.959733   14012 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0624 05:49:34.959733   14012 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0624 05:49:34.959733   14012 command_runner.go:130] >  kind: InitConfiguration
	I0624 05:49:34.959792   14012 command_runner.go:130] >  localAPIEndpoint:
	I0624 05:49:34.959792   14012 command_runner.go:130] > -  advertiseAddress: 172.31.211.219
	I0624 05:49:34.959792   14012 command_runner.go:130] > +  advertiseAddress: 172.31.217.139
	I0624 05:49:34.959792   14012 command_runner.go:130] >    bindPort: 8443
	I0624 05:49:34.959792   14012 command_runner.go:130] >  bootstrapTokens:
	I0624 05:49:34.959836   14012 command_runner.go:130] >    - groups:
	I0624 05:49:34.959836   14012 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0624 05:49:34.959836   14012 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0624 05:49:34.959836   14012 command_runner.go:130] >    name: "multinode-876600"
	I0624 05:49:34.959869   14012 command_runner.go:130] >    kubeletExtraArgs:
	I0624 05:49:34.959869   14012 command_runner.go:130] > -    node-ip: 172.31.211.219
	I0624 05:49:34.959869   14012 command_runner.go:130] > +    node-ip: 172.31.217.139
	I0624 05:49:34.959869   14012 command_runner.go:130] >    taints: []
	I0624 05:49:34.959869   14012 command_runner.go:130] >  ---
	I0624 05:49:34.959869   14012 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0624 05:49:34.959869   14012 command_runner.go:130] >  kind: ClusterConfiguration
	I0624 05:49:34.959869   14012 command_runner.go:130] >  apiServer:
	I0624 05:49:34.959869   14012 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.31.211.219"]
	I0624 05:49:34.959869   14012 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	I0624 05:49:34.959869   14012 command_runner.go:130] >    extraArgs:
	I0624 05:49:34.959869   14012 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0624 05:49:34.959869   14012 command_runner.go:130] >  controllerManager:
	I0624 05:49:34.959869   14012 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.31.211.219
	+  advertiseAddress: 172.31.217.139
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-876600"
	   kubeletExtraArgs:
	-    node-ip: 172.31.211.219
	+    node-ip: 172.31.217.139
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.31.211.219"]
	+  certSANs: ["127.0.0.1", "localhost", "172.31.217.139"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0624 05:49:34.959869   14012 kubeadm.go:1154] stopping kube-system containers ...
	I0624 05:49:34.969168   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0624 05:49:35.002833   14012 command_runner.go:130] > 83a09faf1e2d
	I0624 05:49:35.002833   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:49:35.002833   14012 command_runner.go:130] > caf1b076e912
	I0624 05:49:35.002833   14012 command_runner.go:130] > b42fe71aa0d7
	I0624 05:49:35.002833   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:49:35.002833   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:49:35.002833   14012 command_runner.go:130] > 2f2af473df8a
	I0624 05:49:35.002833   14012 command_runner.go:130] > d072caca0861
	I0624 05:49:35.002833   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:49:35.002833   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:49:35.002833   14012 command_runner.go:130] > d781e9872808
	I0624 05:49:35.002964   14012 command_runner.go:130] > eefbf63a6c05
	I0624 05:49:35.002964   14012 command_runner.go:130] > 0449d7721b5b
	I0624 05:49:35.002964   14012 command_runner.go:130] > 5f89e0f2608f
	I0624 05:49:35.002964   14012 command_runner.go:130] > 6d1c3ec125c9
	I0624 05:49:35.002964   14012 command_runner.go:130] > 6184b2eb79fd
	I0624 05:49:35.003060   14012 docker.go:483] Stopping containers: [83a09faf1e2d f46bdc12472e caf1b076e912 b42fe71aa0d7 f74eb1beb274 b0dd966ee710 2f2af473df8a d072caca0861 7174bdea66e2 d7d8d18e1b11 d781e9872808 eefbf63a6c05 0449d7721b5b 5f89e0f2608f 6d1c3ec125c9 6184b2eb79fd]
	I0624 05:49:35.012183   14012 ssh_runner.go:195] Run: docker stop 83a09faf1e2d f46bdc12472e caf1b076e912 b42fe71aa0d7 f74eb1beb274 b0dd966ee710 2f2af473df8a d072caca0861 7174bdea66e2 d7d8d18e1b11 d781e9872808 eefbf63a6c05 0449d7721b5b 5f89e0f2608f 6d1c3ec125c9 6184b2eb79fd
	I0624 05:49:35.037744   14012 command_runner.go:130] > 83a09faf1e2d
	I0624 05:49:35.037832   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:49:35.037832   14012 command_runner.go:130] > caf1b076e912
	I0624 05:49:35.037832   14012 command_runner.go:130] > b42fe71aa0d7
	I0624 05:49:35.037963   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:49:35.037963   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:49:35.037963   14012 command_runner.go:130] > 2f2af473df8a
	I0624 05:49:35.038040   14012 command_runner.go:130] > d072caca0861
	I0624 05:49:35.038040   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:49:35.038040   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:49:35.038040   14012 command_runner.go:130] > d781e9872808
	I0624 05:49:35.038040   14012 command_runner.go:130] > eefbf63a6c05
	I0624 05:49:35.038040   14012 command_runner.go:130] > 0449d7721b5b
	I0624 05:49:35.038040   14012 command_runner.go:130] > 5f89e0f2608f
	I0624 05:49:35.038040   14012 command_runner.go:130] > 6d1c3ec125c9
	I0624 05:49:35.038040   14012 command_runner.go:130] > 6184b2eb79fd
	I0624 05:49:35.054794   14012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0624 05:49:35.096732   14012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0624 05:49:35.099603   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0624 05:49:35.114198   14012 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:49:35.114559   14012 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0624 05:49:35.114559   14012 kubeadm.go:156] found existing configuration files:
	
	I0624 05:49:35.126633   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0624 05:49:35.136093   14012 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:49:35.144982   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0624 05:49:35.159249   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0624 05:49:35.188074   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0624 05:49:35.190043   14012 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:49:35.205735   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0624 05:49:35.216427   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0624 05:49:35.246360   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0624 05:49:35.262403   14012 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:49:35.262763   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0624 05:49:35.276071   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0624 05:49:35.307365   14012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0624 05:49:35.323355   14012 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:49:35.324262   14012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0624 05:49:35.337531   14012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0624 05:49:35.369923   14012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0624 05:49:35.388354   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0624 05:49:35.702179   14012 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0624 05:49:35.702384   14012 command_runner.go:130] > [certs] Using the existing "sa" key
	I0624 05:49:35.702384   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.339007   14012 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0624 05:49:37.339082   14012 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0624 05:49:37.339143   14012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6367526s)
	I0624 05:49:37.339221   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.632544   14012 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0624 05:49:37.632642   14012 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0624 05:49:37.632642   14012 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0624 05:49:37.632715   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.725506   14012 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0624 05:49:37.725618   14012 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0624 05:49:37.725768   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:37.822436   14012 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0624 05:49:37.822599   14012 api_server.go:52] waiting for apiserver process to appear ...
	I0624 05:49:37.836555   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:38.351904   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:38.836201   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.343829   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.842660   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:49:39.862505   14012 command_runner.go:130] > 1846
	I0624 05:49:39.866272   14012 api_server.go:72] duration metric: took 2.0436984s to wait for apiserver process to appear ...
	I0624 05:49:39.866390   14012 api_server.go:88] waiting for apiserver healthz status ...
	I0624 05:49:39.866461   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.081334   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0624 05:49:43.081400   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0624 05:49:43.081400   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.111655   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0624 05:49:43.117294   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0624 05:49:43.374296   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.382481   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:43.382481   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:43.872578   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:43.897521   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:43.902445   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:44.372784   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:44.382947   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0624 05:49:44.383024   14012 api_server.go:103] status: https://172.31.217.139:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0624 05:49:44.885781   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:49:44.897958   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
	I0624 05:49:44.901078   14012 round_trippers.go:463] GET https://172.31.217.139:8443/version
	I0624 05:49:44.901177   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:44.901177   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:44.901177   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:44.915530   14012 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 05:49:44.915581   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:44.915581   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:44.915615   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:44.915615   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Content-Length: 263
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:44 GMT
	I0624 05:49:44.915615   14012 round_trippers.go:580]     Audit-Id: 9ff5c67f-66f8-416b-8ddd-e8f42a33bd36
	I0624 05:49:44.915652   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:44.915687   14012 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0624 05:49:44.915843   14012 api_server.go:141] control plane version: v1.30.2
	I0624 05:49:44.915927   14012 api_server.go:131] duration metric: took 5.049479s to wait for apiserver health ...
	I0624 05:49:44.915927   14012 cni.go:84] Creating CNI manager for ""
	I0624 05:49:44.915927   14012 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0624 05:49:44.919402   14012 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0624 05:49:44.931920   14012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0624 05:49:44.953124   14012 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0624 05:49:44.953288   14012 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0624 05:49:44.953288   14012 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0624 05:49:44.953318   14012 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0624 05:49:44.953318   14012 command_runner.go:130] > Access: 2024-06-24 12:48:11.919340600 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] > Modify: 2024-06-21 04:42:41.000000000 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] > Change: 2024-06-24 12:48:00.203000000 +0000
	I0624 05:49:44.953318   14012 command_runner.go:130] >  Birth: -
	I0624 05:49:44.953318   14012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0624 05:49:44.953318   14012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0624 05:49:45.008863   14012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0624 05:49:46.237951   14012 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0624 05:49:46.238026   14012 command_runner.go:130] > daemonset.apps/kindnet configured
	I0624 05:49:46.238026   14012 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2291577s)
	I0624 05:49:46.238026   14012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 05:49:46.238026   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:49:46.238026   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.238026   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.238026   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.239306   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.245532   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.245532   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Audit-Id: a56a3580-71d2-4edb-8079-62f0a6d6f081
	I0624 05:49:46.245532   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.245630   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.245691   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.248019   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87788 chars]
	I0624 05:49:46.255228   14012 system_pods.go:59] 12 kube-system pods found
	I0624 05:49:46.255228   14012 system_pods.go:61] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0624 05:49:46.255228   14012 system_pods.go:61] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:49:46.255228   14012 system_pods.go:61] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0624 05:49:46.255228   14012 system_pods.go:61] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0624 05:49:46.255228   14012 system_pods.go:74] duration metric: took 17.2023ms to wait for pod list to return data ...
	I0624 05:49:46.255228   14012 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:49:46.255228   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes
	I0624 05:49:46.255228   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.255228   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.255228   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.265017   14012 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 05:49:46.265017   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Audit-Id: eff60bee-cc73-4a09-98b1-1973870f0d6b
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.265017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.265017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.265017   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.266281   14012 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0624 05:49:46.267717   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267772   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267885   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267922   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267922   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:49:46.267922   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:49:46.267922   14012 node_conditions.go:105] duration metric: took 12.6941ms to run NodePressure ...
	I0624 05:49:46.267922   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0624 05:49:46.733341   14012 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0624 05:49:46.733341   14012 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0624 05:49:46.733341   14012 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0624 05:49:46.733341   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0624 05:49:46.733341   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.733341   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.733341   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.735121   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.735121   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.735121   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.738205   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.738205   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Audit-Id: aa059ae8-70a5-4242-ae9d-77f31c39dd50
	I0624 05:49:46.738205   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.739655   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1762","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0624 05:49:46.741503   14012 kubeadm.go:733] kubelet initialised
	I0624 05:49:46.741558   14012 kubeadm.go:734] duration metric: took 8.217ms waiting for restarted kubelet to initialise ...
	I0624 05:49:46.741558   14012 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:49:46.741706   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:49:46.741706   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.741706   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.741825   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.745261   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:49:46.745261   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.745261   14012 round_trippers.go:580]     Audit-Id: 5ce3735f-33f4-404f-a2c8-99c3505dc970
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.745573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.745573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.745573   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.747579   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87195 chars]
	I0624 05:49:46.751459   14012 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.752144   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:49:46.752183   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.752183   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.752221   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.754571   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:49:46.755653   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.755653   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.755653   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.755707   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.755707   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.755707   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.755707   14012 round_trippers.go:580]     Audit-Id: 1c3048bd-9a07-43c9-9dcc-6d02058758ef
	I0624 05:49:46.755954   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:49:46.756334   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.756334   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.756334   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.756334   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.757081   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.757081   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.757081   14012 round_trippers.go:580]     Audit-Id: c2192bba-0d48-47ce-9ce1-964ab92394dd
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.759421   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.759421   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.759421   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.759742   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.760275   14012 pod_ready.go:97] node "multinode-876600" hosting pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.760356   14012 pod_ready.go:81] duration metric: took 8.3677ms for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.760356   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.760356   14012 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.760442   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:49:46.760527   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.760527   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.760527   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.770864   14012 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0624 05:49:46.770864   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.771619   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Audit-Id: 4df6a388-9901-44f9-969f-906354682c9d
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.771619   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.771619   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.771791   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1762","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0624 05:49:46.772819   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.772930   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.772930   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.772969   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.773619   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.773619   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.775666   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.775666   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Audit-Id: 7da004c8-f997-47f9-a5d9-4d9cb3d09782
	I0624 05:49:46.775666   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.776131   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.776131   14012 pod_ready.go:97] node "multinode-876600" hosting pod "etcd-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.776131   14012 pod_ready.go:81] duration metric: took 15.7755ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.776131   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "etcd-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.776131   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.776658   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:49:46.776658   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.776813   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.776813   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.778416   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.778416   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.778416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.778416   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.778416   14012 round_trippers.go:580]     Audit-Id: 9c2bf27f-bded-46f4-8d2c-8b9064f1a39c
	I0624 05:49:46.779663   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.779663   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.779663   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.779924   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a1504b-2338-458c-b448-92e8836b479a","resourceVersion":"1763","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.217.139:8443","kubernetes.io/config.hash":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.mirror":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.seen":"2024-06-24T12:49:37.772966703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0624 05:49:46.780288   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.780288   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.780288   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.780288   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.781549   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:49:46.781549   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.781549   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Audit-Id: 2d1c38c8-4153-439c-a42f-f22e94257d2a
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.783415   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.783415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.783487   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.784030   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-apiserver-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.784030   14012 pod_ready.go:81] duration metric: took 7.3721ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.784092   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-apiserver-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.784092   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.784202   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:49:46.784287   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.784287   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.784287   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.785087   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.785087   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.785087   14012 round_trippers.go:580]     Audit-Id: 5293c4ef-1b69-4418-8847-b8a462584079
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.786745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.786745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.786745   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.786790   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"1757","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0624 05:49:46.787601   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:46.787601   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.787601   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.787601   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.788193   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.790419   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Audit-Id: c2fdb3fb-789c-4e1c-b2cc-4181091d3726
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.790419   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.790488   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.790488   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.790488   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.790567   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:46.791096   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-controller-manager-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.791159   14012 pod_ready.go:81] duration metric: took 7.0671ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:46.791159   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-controller-manager-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:46.791159   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:46.935534   14012 request.go:629] Waited for 144.2269ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:49:46.935715   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:49:46.935799   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:46.935839   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:46.935839   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:46.936156   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:46.940161   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:46.940161   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:46.940161   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:46 GMT
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Audit-Id: dfded3e9-0ea8-49ac-8fa7-da4ed8dc7b19
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:46.940161   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:46.940379   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hjjs8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e148504-3300-4591-9576-7c5597851f41","resourceVersion":"609","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0624 05:49:47.142674   14012 request.go:629] Waited for 201.1098ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:49:47.142957   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:49:47.142957   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.142957   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.142957   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.143685   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.143685   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.143685   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.143685   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.147896   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.147896   14012 round_trippers.go:580]     Audit-Id: 377747e2-e78e-4cdf-b4ab-381671704590
	I0624 05:49:47.147949   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.147949   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.147949   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"1674","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3827 chars]
	I0624 05:49:47.148493   14012 pod_ready.go:92] pod "kube-proxy-hjjs8" in "kube-system" namespace has status "Ready":"True"
	I0624 05:49:47.148696   14012 pod_ready.go:81] duration metric: took 357.4616ms for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.148728   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.348129   14012 request.go:629] Waited for 199.1996ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:49:47.348129   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:49:47.348250   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.348250   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.348250   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.354839   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:49:47.354839   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.354839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.354839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Audit-Id: 3c14a3b0-1610-4bc5-8cf8-f908c7669877
	I0624 05:49:47.354839   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.354839   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"1835","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0624 05:49:47.542334   14012 request.go:629] Waited for 186.5757ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:47.542387   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:47.542629   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.542629   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.542721   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.543502   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.543502   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.543502   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.543502   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.543502   14012 round_trippers.go:580]     Audit-Id: c0b9f831-96e5-4efc-beda-c9e73d9e5f13
	I0624 05:49:47.546154   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:47.546865   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-proxy-lcc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:47.546978   14012 pod_ready.go:81] duration metric: took 398.2486ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:47.546978   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-proxy-lcc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:47.546978   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:47.735773   14012 request.go:629] Waited for 188.5213ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:49:47.735963   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:49:47.735963   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.736090   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.736090   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.743317   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:49:47.743745   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Audit-Id: 3ceeecb1-acd1-40b6-8cf9-efad270c8bae
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.743745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.743816   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.743816   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.743816   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.744000   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf7jm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4f99ace-bf94-40d8-b28f-27ec938418ef","resourceVersion":"1727","creationTimestamp":"2024-06-24T12:34:19Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:49:47.935486   14012 request.go:629] Waited for 190.3271ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:49:47.935486   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:49:47.935715   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:47.935758   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:47.935758   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:47.936154   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:47.939520   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:47.939520   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:47.939520   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:47.939520   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:47 GMT
	I0624 05:49:47.939520   14012 round_trippers.go:580]     Audit-Id: 693e992d-c4b1-4cf0-8816-7355a1b8a0ec
	I0624 05:49:47.939575   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:47.939575   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:47.939619   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m03","uid":"1392cc6a-2e48-4bde-9120-b3d99174bf99","resourceVersion":"1740","creationTimestamp":"2024-06-24T12:45:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_45_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0624 05:49:47.940202   14012 pod_ready.go:97] node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:49:47.940202   14012 pod_ready.go:81] duration metric: took 393.1448ms for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:47.940202   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:49:47.940202   14012 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:49:48.142457   14012 request.go:629] Waited for 202.2549ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:49:48.142790   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:49:48.142790   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.142790   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.142790   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.143163   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:48.143163   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.143163   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Audit-Id: dd96a694-b962-400c-ad11-53306a39e259
	I0624 05:49:48.143163   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.147649   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.147649   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.147979   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"1760","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0624 05:49:48.335881   14012 request.go:629] Waited for 186.9997ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.335881   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.335881   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.335881   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.335881   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.336537   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:48.340727   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Audit-Id: 82cc05b1-d644-483a-8cdd-575c6e2cbf34
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.340727   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.340727   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.340808   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.340808   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.341020   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:48.341184   14012 pod_ready.go:97] node "multinode-876600" hosting pod "kube-scheduler-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:48.341184   14012 pod_ready.go:81] duration metric: took 400.9813ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	E0624 05:49:48.341184   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600" hosting pod "kube-scheduler-multinode-876600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600" has status "Ready":"False"
	I0624 05:49:48.341184   14012 pod_ready.go:38] duration metric: took 1.5995451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:49:48.341184   14012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0624 05:49:48.362051   14012 command_runner.go:130] > -16
	I0624 05:49:48.362145   14012 ops.go:34] apiserver oom_adj: -16
	I0624 05:49:48.362145   14012 kubeadm.go:591] duration metric: took 13.4849654s to restartPrimaryControlPlane
	I0624 05:49:48.362145   14012 kubeadm.go:393] duration metric: took 13.565256s to StartCluster
	I0624 05:49:48.362275   14012 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:48.362468   14012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 05:49:48.364136   14012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:49:48.366220   14012 start.go:234] Will wait 6m0s for node &{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0624 05:49:48.366220   14012 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0624 05:49:48.366766   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:49:48.369338   14012 out.go:177] * Verifying Kubernetes components...
	I0624 05:49:48.372128   14012 out.go:177] * Enabled addons: 
	I0624 05:49:48.378854   14012 addons.go:510] duration metric: took 12.6342ms for enable addons: enabled=[]
	I0624 05:49:48.383089   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:49:48.640440   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0624 05:49:48.660134   14012 node_ready.go:35] waiting up to 6m0s for node "multinode-876600" to be "Ready" ...
	I0624 05:49:48.668691   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:48.668691   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:48.668871   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:48.668871   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:48.676881   14012 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 05:49:48.676950   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Audit-Id: d7c5b015-4730-4db9-a03d-a722d4567614
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:48.676950   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:48.677005   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:48.677005   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:48.677005   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:48 GMT
	I0624 05:49:48.678047   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:49.165924   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:49.165924   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:49.165924   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:49.165924   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:49.166456   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:49.166456   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:49.170656   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:49 GMT
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Audit-Id: 62a7f463-c9ac-449d-bed2-21d28f00eb10
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:49.170656   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:49.170656   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:49.170984   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:49.669857   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:49.669972   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:49.669972   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:49.669972   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:49.670337   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:49.670337   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Audit-Id: 62a6f727-469d-4dea-89bb-8d1a77b11a57
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:49.670337   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:49.670337   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:49.670337   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:49 GMT
	I0624 05:49:49.675142   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.173144   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:50.173144   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:50.173144   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:50.173144   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:50.173941   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:50.177995   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:50.177995   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:50.177995   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:50 GMT
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Audit-Id: 484a1611-7fd8-4219-8445-4a9fab11bbd9
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:50.177995   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:50.178741   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.663728   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:50.663970   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:50.663970   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:50.663970   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:50.664386   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:50.664386   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Audit-Id: 260f069b-5ebd-4ad9-987a-316613cc0a64
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:50.664386   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:50.664386   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:50.664386   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:50 GMT
	I0624 05:49:50.668776   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:50.669114   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:51.168134   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:51.168374   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:51.168374   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:51.168374   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:51.169194   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:51.169194   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:51.174228   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:51 GMT
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Audit-Id: 261fd437-590d-43df-99d7-4da139b5e3f2
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:51.174228   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:51.174228   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:51.174487   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:51.667217   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:51.667314   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:51.667314   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:51.667314   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:51.667670   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:51.667670   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:51.667670   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:51 GMT
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Audit-Id: 7d3c2e1a-149b-453d-9291-9b7b9bc2dcd5
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:51.670643   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:51.670643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:51.670643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:51.671088   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.176750   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:52.176815   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:52.176815   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:52.176815   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:52.182049   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:49:52.182049   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:52.182049   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:52.182620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:52.182620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:52 GMT
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Audit-Id: 7bcf4e3c-b31c-42a5-9675-aef8c3a0a298
	I0624 05:49:52.182620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:52.182762   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.673135   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:52.673314   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:52.673314   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:52.673314   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:52.673671   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:52.673671   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:52.673671   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:52 GMT
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Audit-Id: 299cc14e-23f4-49b8-a134-ded2343cf342
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:52.673671   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:52.673671   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:52.678451   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:52.679756   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:53.175756   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:53.175928   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:53.175928   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:53.175928   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:53.176750   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:53.176750   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:53.176750   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:53.176750   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:53.180134   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:53.180134   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:53.180134   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:53 GMT
	I0624 05:49:53.180134   14012 round_trippers.go:580]     Audit-Id: 1f380ba4-5bb0-4891-adb8-119db425f568
	I0624 05:49:53.180386   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:53.669943   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:53.670048   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:53.670048   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:53.670048   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:53.670464   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:53.670464   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:53.670464   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:53.670464   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:53 GMT
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Audit-Id: 6c7815bd-1e19-4e1b-9378-30fb9662db04
	I0624 05:49:53.670464   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:53.673585   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:54.166632   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:54.166849   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:54.166849   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:54.166849   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:54.167093   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:54.170706   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Audit-Id: 5f8841c8-88ad-401f-a5b8-ace419a6ff17
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:54.170706   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:54.170706   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:54.170706   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:54 GMT
	I0624 05:49:54.170946   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:54.662878   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:54.663166   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:54.663166   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:54.663166   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:54.663701   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:54.672979   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:54.672979   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:54.672979   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:54 GMT
	I0624 05:49:54.672979   14012 round_trippers.go:580]     Audit-Id: 31ba798a-f2e4-43c7-bbcc-d0e0f697f30d
	I0624 05:49:54.673397   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:55.164099   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:55.164448   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:55.164448   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:55.164448   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:55.164831   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:55.164831   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:55.164831   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:55 GMT
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Audit-Id: b4cd0ced-8f7d-4f98-9778-234f1e8f06c5
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:55.164831   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:55.164831   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:55.170148   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:55.170616   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:55.674312   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:55.674312   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:55.674863   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:55.674863   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:55.692363   14012 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0624 05:49:55.692607   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:55.692607   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:55.692607   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:55 GMT
	I0624 05:49:55.692607   14012 round_trippers.go:580]     Audit-Id: 6cb6be55-f5af-48db-b21c-4b1990795fc4
	I0624 05:49:55.695598   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1753","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0624 05:49:56.160848   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:56.161145   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:56.161145   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:56.161145   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:56.161418   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:56.161418   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:56.166165   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:56.166165   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:56 GMT
	I0624 05:49:56.166165   14012 round_trippers.go:580]     Audit-Id: 920a34a9-2fb6-4283-8d99-8eefd5d38269
	I0624 05:49:56.166433   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:56.668119   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:56.668119   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:56.668119   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:56.668119   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:56.675763   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:49:56.675763   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Audit-Id: d006fba3-00d7-4cbd-885c-f6c5f48ec508
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:56.675861   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:56.675861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:56.675861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:56.675900   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:56 GMT
	I0624 05:49:56.675900   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.162946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:57.162946   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:57.162946   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:57.162946   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:57.163517   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:57.163517   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Audit-Id: de529e7a-3d3a-4af0-be7b-753914b8677e
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:57.163517   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:57.163517   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:57.163517   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:57 GMT
	I0624 05:49:57.167844   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.666210   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:57.666210   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:57.666296   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:57.666296   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:57.666767   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:57.669889   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:57.669889   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:57 GMT
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Audit-Id: 1615bc6a-876f-4aa6-ad89-4f20b16c94b4
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:57.669889   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:57.669889   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:57.670243   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:57.670862   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:49:58.162146   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:58.162146   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:58.162146   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:58.162452   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:58.162694   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:58.162694   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:58.167008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:58.167008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:58 GMT
	I0624 05:49:58.167008   14012 round_trippers.go:580]     Audit-Id: 90a978f6-34a0-4eb3-9ad4-3a73e42e4135
	I0624 05:49:58.167267   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:58.665100   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:58.665185   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:58.665185   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:58.665185   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:58.665913   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:58.665913   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:58 GMT
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Audit-Id: 118e35cb-9e0e-4441-9bc5-5c0f366bb75b
	I0624 05:49:58.665913   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:58.668672   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:58.668672   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:58.668672   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:58.668893   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.180071   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:59.180071   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:59.180071   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:59.180071   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:59.180596   14012 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0624 05:49:59.184430   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:59.184430   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:59.184548   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:59 GMT
	I0624 05:49:59.184548   14012 round_trippers.go:580]     Audit-Id: b6b87fed-c625-49f8-8148-c7f7ec7476f0
	I0624 05:49:59.184639   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:59.184639   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:59.184639   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:59.184639   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.671231   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:49:59.671231   14012 round_trippers.go:469] Request Headers:
	I0624 05:49:59.671303   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:49:59.671303   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:49:59.674918   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:49:59.674918   14012 round_trippers.go:577] Response Headers:
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:49:59.674918   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:49:59.674918   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:49:59 GMT
	I0624 05:49:59.674918   14012 round_trippers.go:580]     Audit-Id: 0b4da115-a3a6-4ed2-849f-7c56ca5bf742
	I0624 05:49:59.675782   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:49:59.676281   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:00.171540   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:00.171540   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:00.171657   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:00.171657   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:00.177224   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:00.177224   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:00.177377   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:00.177377   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:00 GMT
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Audit-Id: 56fc1926-a0ef-4508-912f-1cb667d5e3c2
	I0624 05:50:00.177377   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:00.177685   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:00.670950   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:00.670950   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:00.671094   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:00.671094   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:00.675746   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:00.675746   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:00.675746   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:00.675746   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:00.675746   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:00.675941   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:00.675941   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:00 GMT
	I0624 05:50:00.675941   14012 round_trippers.go:580]     Audit-Id: 908c81d2-d416-4734-b52a-ed5183c0f41c
	I0624 05:50:00.676331   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:01.168397   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:01.168587   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:01.168587   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:01.168587   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:01.172185   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:01.173199   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:01.173225   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:01.173225   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:01 GMT
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Audit-Id: a955f5df-b0e7-4999-9a3a-a48ea9af8c65
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:01.173225   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:01.173884   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:01.669561   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:01.669832   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:01.669832   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:01.669832   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:01.673416   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:01.674088   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:01.674088   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:01 GMT
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Audit-Id: d7376207-fbf1-4cb3-b487-cb31eabbb66d
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:01.674088   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:01.674088   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:01.674916   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:02.171837   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:02.171990   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:02.171990   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:02.171990   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:02.177779   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:02.177779   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:02.177779   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:02.177779   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:02 GMT
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Audit-Id: cc3f2cfb-8a55-4e64-a643-73982e9bc09b
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:02.177779   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:02.178161   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:02.178731   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:02.671829   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:02.671959   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:02.671959   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:02.671959   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:02.676453   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:02.676453   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:02.676453   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:02.676453   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:02 GMT
	I0624 05:50:02.676453   14012 round_trippers.go:580]     Audit-Id: 56700678-8efa-44f3-8d6a-7311f1aa20c0
	I0624 05:50:02.677240   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:03.169974   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:03.170057   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:03.170057   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:03.170057   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:03.173940   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:03.174416   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:03.174416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:03.174416   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:03 GMT
	I0624 05:50:03.174416   14012 round_trippers.go:580]     Audit-Id: 0986f639-a4b4-48a3-bea9-ba2abec3acdc
	I0624 05:50:03.174416   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:03.670342   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:03.670409   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:03.670409   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:03.670478   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:03.677319   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:03.677319   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:03.677319   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:03.677319   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:03 GMT
	I0624 05:50:03.677319   14012 round_trippers.go:580]     Audit-Id: 27f43b91-b04f-4b99-ac99-6fe888b12ba5
	I0624 05:50:03.678836   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.169015   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:04.169015   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:04.169015   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:04.169015   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:04.172627   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:04.172627   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:04.172627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:04 GMT
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Audit-Id: aa8121e1-631a-4a44-b15c-6ad9047b0bcb
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:04.172627   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:04.172627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:04.173791   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.671639   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:04.671639   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:04.671639   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:04.671639   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:04.675238   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:04.675238   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:04.675238   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:04.675238   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:04 GMT
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Audit-Id: 8fab93ea-cce9-470e-8a0e-085eeb9b272e
	I0624 05:50:04.675238   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:04.675238   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:04.676231   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:05.169701   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:05.169701   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:05.169701   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:05.169701   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:05.174374   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:05.174374   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:05 GMT
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Audit-Id: a4324e0d-a73f-4c5f-85d7-9e3da2a74739
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:05.174374   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:05.174374   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:05.174374   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:05.174882   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:05.669404   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:05.669404   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:05.669404   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:05.669404   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:05.674295   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:05.674295   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:05 GMT
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Audit-Id: 110874d4-fb32-4ed1-8b82-26979e7f8f2c
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:05.674379   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:05.674379   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:05.674379   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:05.674634   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:06.167589   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:06.167892   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:06.167892   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:06.167892   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:06.172489   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:06.172489   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:06.172489   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:06.172489   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:06 GMT
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Audit-Id: 0395f95b-8fb8-4463-8699-afc27b3cd268
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:06.172489   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:06.173114   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:06.667281   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:06.667498   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:06.667498   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:06.667498   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:06.671591   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:06.671591   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:06.671591   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:06.671591   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:06.671591   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:06.671778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:06.671778   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:06 GMT
	I0624 05:50:06.671778   14012 round_trippers.go:580]     Audit-Id: 0749ebc1-22b1-47e4-bdd9-2221f6be7be0
	I0624 05:50:06.673043   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:07.166356   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:07.166356   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:07.166356   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:07.166356   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:07.169949   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:07.170949   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:07 GMT
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Audit-Id: aaf62023-da6b-468d-a452-aa8305778f5b
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:07.171020   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:07.171020   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:07.171020   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:07.171519   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:07.172150   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:07.667291   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:07.667528   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:07.667528   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:07.667528   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:07.674865   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:07.674865   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:07.674865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:07 GMT
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Audit-Id: 0e7079ed-d2f9-40a7-ae35-5c0a6826773d
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:07.674865   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:07.674865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:07.674865   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:08.168844   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:08.168844   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:08.168844   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:08.168844   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:08.173800   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:08.174349   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:08.174349   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:08.174349   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:08 GMT
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Audit-Id: 89045b9b-a8bf-48e4-b7b5-89ae293d61c8
	I0624 05:50:08.174349   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:08.174424   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:08.175196   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:08.667807   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:08.668039   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:08.668039   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:08.668039   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:08.671291   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:08.671946   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Audit-Id: b1946d1b-d1e8-4bab-8f71-9e5b66952410
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:08.671946   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:08.671946   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:08.671946   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:08 GMT
	I0624 05:50:08.672208   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:09.170511   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:09.170598   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:09.170598   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:09.170710   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:09.174259   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:09.174974   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Audit-Id: 7be20a32-be06-4121-8209-bbee987eea43
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:09.175055   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:09.175055   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:09.175055   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:09 GMT
	I0624 05:50:09.175236   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:09.175872   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:09.672238   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:09.672238   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:09.672238   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:09.672238   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:09.676705   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:09.677145   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:09.677145   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:09 GMT
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Audit-Id: 49a0505d-a0e9-479c-b880-aca3b0d87646
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:09.677224   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:09.677224   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:09.677224   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:10.170926   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:10.170926   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:10.170926   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:10.170926   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:10.174551   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:10.175511   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:10.175511   14012 round_trippers.go:580]     Audit-Id: a25e9bdd-31fb-4fa0-95bb-5d23174459e5
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:10.175548   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:10.175548   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:10.175548   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:10 GMT
	I0624 05:50:10.175751   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:10.673649   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:10.673832   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:10.673832   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:10.673832   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:10.677738   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:10.678073   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:10 GMT
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Audit-Id: a7ada07d-c756-4e9c-867e-28f5092c6321
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:10.678073   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:10.678073   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:10.678073   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:10.678872   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.161573   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:11.161960   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:11.161960   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:11.161960   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:11.166074   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:11.166074   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:11.166074   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:11.166074   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:11 GMT
	I0624 05:50:11.166074   14012 round_trippers.go:580]     Audit-Id: 90d0b419-1af7-48a7-85f5-c46e4e424fb3
	I0624 05:50:11.166074   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.661847   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:11.661938   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:11.661938   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:11.661938   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:11.666803   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:11.666803   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Audit-Id: 00df1833-64d0-4cce-a5c0-7fb38af0e0ee
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:11.666803   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:11.666803   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:11.666803   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:11 GMT
	I0624 05:50:11.667622   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:11.668152   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:12.175756   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:12.175756   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:12.175982   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:12.175982   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:12.179544   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:12.180293   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:12.180293   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:12.180293   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:12 GMT
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Audit-Id: bb479baa-2ab8-4ecb-9e0a-ae833c7e6680
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:12.180293   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:12.180492   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:12.674727   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:12.674727   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:12.674727   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:12.674727   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:12.677356   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:12.677356   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Audit-Id: 3240f605-ab36-462a-b935-b5371031a773
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:12.677356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:12.677356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:12.677356   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:12 GMT
	I0624 05:50:12.679399   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:13.173187   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:13.173187   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:13.173187   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:13.173187   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:13.176778   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:13.176778   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:13.176778   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:13.176778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:13.176778   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:13 GMT
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Audit-Id: f79f4075-8b79-4e22-abda-78a3c14bdd11
	I0624 05:50:13.177761   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:13.177947   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:13.660489   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:13.660892   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:13.660972   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:13.660972   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:13.665657   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:13.665989   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Audit-Id: f1ae38b2-4293-4b50-ae9c-89dceb3a9d87
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:13.665989   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:13.665989   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:13.665989   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:13 GMT
	I0624 05:50:13.666914   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:14.171787   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:14.171787   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:14.171787   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:14.171787   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:14.174639   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:14.175693   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:14.175693   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:14 GMT
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Audit-Id: fac1a829-354d-4afc-b58f-aa14ebe356f8
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:14.175770   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:14.175770   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:14.175770   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:14.176405   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:14.177160   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:14.671000   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:14.671000   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:14.671070   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:14.671070   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:14.674715   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:14.674715   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:14 GMT
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Audit-Id: 064ee59b-16a0-40aa-b5df-450d9f1c371e
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:14.674715   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:14.674715   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:14.674715   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:14.676532   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:15.171724   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:15.171724   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:15.171927   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:15.171927   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:15.175770   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:15.176620   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:15.176620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:15 GMT
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Audit-Id: 5125f241-aab9-4c8f-8c2e-4aebbcc5fac7
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:15.176620   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:15.176620   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:15.176620   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:15.674193   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:15.674480   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:15.674480   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:15.674557   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:15.678175   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:15.678175   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:15.678175   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:15.678175   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:15 GMT
	I0624 05:50:15.678175   14012 round_trippers.go:580]     Audit-Id: 866b1469-019b-4e69-acd8-f7d4a988a00e
	I0624 05:50:15.678899   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.164296   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:16.164296   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:16.164296   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:16.164296   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:16.169120   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:16.169519   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:16 GMT
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Audit-Id: a421ef8b-ecc6-45c6-953d-c6a354c29a3c
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:16.169519   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:16.169519   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:16.169519   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:16.169727   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.665441   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:16.665441   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:16.665787   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:16.665787   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:16.670109   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:16.670109   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:16.670109   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:16.670109   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:16.670109   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:16 GMT
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Audit-Id: 95ff4e68-19bd-4974-8128-20ca75144d12
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:16.670644   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:16.670999   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:16.671867   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:17.172606   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:17.172606   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:17.172606   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:17.172606   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:17.176405   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:17.176405   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Audit-Id: 3bef91af-a293-40e0-a995-1629c31f3b18
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:17.176405   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:17.177283   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:17.177283   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:17.177283   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:17 GMT
	I0624 05:50:17.177559   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:17.670989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:17.670989   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:17.670989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:17.671123   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:17.674452   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:17.674452   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:17.674452   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:17.674452   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:17.674452   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:17 GMT
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Audit-Id: 1cbb287f-7522-47e1-9c19-1025690b7dda
	I0624 05:50:17.675195   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:17.675365   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.174338   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:18.174338   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:18.174436   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:18.174436   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:18.178161   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:18.178535   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:18.178535   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:18.178535   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:18 GMT
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Audit-Id: d40aa7ee-823b-47d5-bc58-6a266d2014de
	I0624 05:50:18.178535   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:18.178535   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.675785   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:18.675785   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:18.675785   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:18.675785   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:18.683180   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:18.683180   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:18.683180   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:18.683180   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:18 GMT
	I0624 05:50:18.683180   14012 round_trippers.go:580]     Audit-Id: c1c5f106-8963-4032-b7bd-d4c36899d37e
	I0624 05:50:18.683953   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:18.683995   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:19.172788   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:19.172788   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:19.172788   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:19.172788   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:19.176367   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:19.176367   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:19 GMT
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Audit-Id: 5dbbd0c1-58f2-4690-90bc-0a5b31a5e3b1
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:19.177367   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:19.177367   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:19.177367   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:19.177544   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:19.670865   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:19.670865   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:19.670865   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:19.670865   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:19.675521   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:19.675521   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Audit-Id: 25103891-a44d-48cf-94cf-48dd326b8222
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:19.675521   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:19.675521   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:19.675521   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:19 GMT
	I0624 05:50:19.675943   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:20.169540   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:20.169540   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:20.169624   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:20.169624   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:20.172881   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:20.173531   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Audit-Id: 9bb9ed5b-64b0-4365-b6e2-f6478b564542
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:20.173531   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:20.173531   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:20.173531   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:20 GMT
	I0624 05:50:20.173828   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:20.668816   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:20.668904   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:20.668904   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:20.668904   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:20.672251   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:20.672456   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Audit-Id: 2b7cd664-a34f-4102-8de9-e641ccb068db
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:20.672456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:20.672456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:20.672456   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:20 GMT
	I0624 05:50:20.672456   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:21.167670   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:21.167670   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:21.167670   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:21.167670   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:21.171960   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:21.172791   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:21.172791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:21.172791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:21 GMT
	I0624 05:50:21.172791   14012 round_trippers.go:580]     Audit-Id: d9a0a253-8b6c-4b5d-837f-6f38b83b245a
	I0624 05:50:21.172976   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:21.173640   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:21.667815   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:21.667815   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:21.667815   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:21.667815   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:21.671414   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:21.672118   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Audit-Id: c48c8792-da4d-4c89-ae05-48a7658137fb
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:21.672118   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:21.672118   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:21.672118   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:21 GMT
	I0624 05:50:21.672281   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:22.167651   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:22.167651   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:22.167651   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:22.167651   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:22.171237   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:22.171237   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:22.172123   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:22.172123   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:22 GMT
	I0624 05:50:22.172123   14012 round_trippers.go:580]     Audit-Id: e9b3710d-cb0d-4072-ad75-0ac38bee1528
	I0624 05:50:22.172327   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:22.666089   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:22.666234   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:22.666234   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:22.666234   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:22.672882   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:22.672882   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Audit-Id: 33d59337-f7d2-408f-8ece-49433fc51ab1
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:22.672882   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:22.672882   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:22.672882   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:22 GMT
	I0624 05:50:22.673626   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.168025   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:23.168025   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:23.168025   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:23.168025   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:23.171603   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:23.171603   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:23.171603   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:23.171603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:23.171603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:23.171603   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:23 GMT
	I0624 05:50:23.172503   14012 round_trippers.go:580]     Audit-Id: 378aecaa-32a2-4e64-8c30-9c6076e3d44e
	I0624 05:50:23.172503   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:23.172577   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.667920   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:23.667920   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:23.667920   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:23.667920   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:23.672356   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:23.672356   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:23.672356   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:23.672356   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:23.672356   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:23.673197   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:23.673197   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:23 GMT
	I0624 05:50:23.673197   14012 round_trippers.go:580]     Audit-Id: 402f4f65-55c7-4740-936c-51b34d2ff8db
	I0624 05:50:23.673436   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:23.674038   14012 node_ready.go:53] node "multinode-876600" has status "Ready":"False"
	I0624 05:50:24.165837   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.166185   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.166185   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.166331   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.170047   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:24.170099   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.170099   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.170186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Audit-Id: a6d1ea4a-a4c3-4cd4-a4c0-c215014723e5
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.170186   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.170186   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1871","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0624 05:50:24.665215   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.665215   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.665215   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.665215   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.680661   14012 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0624 05:50:24.681213   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Audit-Id: da8499e4-8a12-4d0e-8209-67eb1c36e8c3
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.681213   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.681284   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.681284   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.681284   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.681513   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:24.682078   14012 node_ready.go:49] node "multinode-876600" has status "Ready":"True"
	I0624 05:50:24.682305   14012 node_ready.go:38] duration metric: took 36.0218111s for node "multinode-876600" to be "Ready" ...
	I0624 05:50:24.682305   14012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:50:24.682432   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:50:24.682432   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.682508   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.682508   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.688138   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:24.688861   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.688861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Audit-Id: 2eb217b4-4739-4110-ad39-c0f3608cf259
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.688861   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.688861   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.690532   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1917"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86634 chars]
	I0624 05:50:24.694829   14012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:24.695077   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:24.695077   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.695077   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.695077   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.697739   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:24.697739   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.697739   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.697739   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.698458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.698458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.698458   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.698458   14012 round_trippers.go:580]     Audit-Id: b5d49757-c933-4f1a-af65-38cbae38e997
	I0624 05:50:24.698615   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:24.699730   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:24.699730   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:24.699730   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:24.699814   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:24.712186   14012 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0624 05:50:24.712186   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Audit-Id: c2446dc4-9b69-4d92-b048-79c73eee8c71
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:24.712186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:24.712186   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:24.712186   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:24 GMT
	I0624 05:50:24.712747   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:25.201240   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:25.201306   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.201306   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.201306   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.204762   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:25.205784   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.205784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Audit-Id: ea0edc87-b213-4085-9e92-25ae2e8cb757
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.205784   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.205784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.206037   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:25.206748   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:25.206748   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.206748   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.206748   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.209613   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:25.209613   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.209613   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.210303   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.210303   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Audit-Id: 8e8e1984-5907-4d74-8c5e-9f9f0707450b
	I0624 05:50:25.210303   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.210501   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:25.699513   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:25.699769   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.699769   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.699769   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.706595   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:25.706595   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Audit-Id: f08b3e84-dd33-4e36-938e-b080e07aea16
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.706595   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.706595   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.706595   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.706595   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:25.707286   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:25.707286   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:25.707833   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:25.707833   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:25.710473   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:25.710473   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:25.710848   14012 round_trippers.go:580]     Audit-Id: 5d1a93a6-5a9d-4280-9db5-701c4781644b
	I0624 05:50:25.710848   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:25.710938   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:25.710938   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:25.710938   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:25.710938   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:25 GMT
	I0624 05:50:25.711348   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1915","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0624 05:50:26.200198   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:26.200391   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.200391   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.200391   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.204737   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:26.204737   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Audit-Id: 06a287d8-3587-4edb-855b-8d9c93bd7f26
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.205443   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.205443   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.205443   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.205522   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:26.206305   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:26.206305   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.206437   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.206437   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.209199   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:26.209199   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.209199   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.209199   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.209199   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.209199   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.209767   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.209767   14012 round_trippers.go:580]     Audit-Id: 664c76fc-77ca-4bc3-913f-4f995ab0cb86
	I0624 05:50:26.210063   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:26.696827   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:26.697017   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.697017   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.697017   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.700596   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:26.701652   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.701652   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.701652   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.701652   14012 round_trippers.go:580]     Audit-Id: ff0c947a-4028-4f3d-822d-9f01c1afd2c2
	I0624 05:50:26.702636   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:26.703683   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:26.703683   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:26.703683   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:26.703683   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:26.706269   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:26.706269   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Audit-Id: 6f23babe-d9c7-4d42-82e0-f85251d35b13
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:26.706716   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:26.706716   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:26.706716   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:26.706785   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:26 GMT
	I0624 05:50:26.706785   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:26.707553   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:27.197915   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:27.198031   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.198031   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.198031   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.201446   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:27.201446   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.201446   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.201446   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.202252   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Audit-Id: 0fa62ad7-bf23-481d-9dd5-08191ac0ec4f
	I0624 05:50:27.202252   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.203063   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:27.203659   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:27.203837   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.203837   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.203837   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.206717   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:27.206717   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.206717   14012 round_trippers.go:580]     Audit-Id: a20fc323-db9b-45c5-8970-9218bee9e9b5
	I0624 05:50:27.206717   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.207035   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.207035   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.207035   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.207035   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.207233   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:27.700989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:27.700989   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.700989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.700989   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.705642   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:27.705642   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Audit-Id: afdc4acb-72dc-4458-8736-65ae25f45eec
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.705879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.705879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.705879   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.706086   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:27.706840   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:27.706840   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:27.706840   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:27.706840   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:27.709463   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:27.709865   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Audit-Id: d4cf9f94-a68c-42de-8f38-cdfc1bf7dc0b
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:27.709865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:27.709865   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:27.709865   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:27 GMT
	I0624 05:50:27.709865   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.205427   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:28.205427   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.205427   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.205427   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.209015   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.209867   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.209867   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Audit-Id: 4c439a1c-21a7-4130-8e08-01842e816e0b
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.209867   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.209867   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.210143   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:28.210898   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:28.210898   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.210898   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.210898   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.213498   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:28.213498   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.213498   14012 round_trippers.go:580]     Audit-Id: a21d0dff-b65c-4568-84a0-ae70f339f4de
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.214128   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.214128   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.214128   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.214193   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.706818   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:28.707028   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.707028   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.707028   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.710540   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.711539   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.711539   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.711539   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Audit-Id: ad42d109-8160-4359-8801-8b87ec0f3246
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.711611   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.711786   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:28.713220   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:28.713220   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:28.713220   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:28.713220   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:28.716354   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:28.716354   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Audit-Id: 0981b4a8-eafc-4317-a718-35891a327842
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:28.716354   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:28.716354   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:28.716354   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:28 GMT
	I0624 05:50:28.716854   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:28.717288   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:29.205824   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:29.205824   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.205824   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.205824   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.209409   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.209409   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.209409   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.209409   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Audit-Id: 4f2b54b6-6fb2-485f-a18f-c0d4851a8442
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.210307   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.210307   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.210514   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:29.211425   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:29.211479   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.211479   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.211479   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.215378   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.215378   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.215378   14012 round_trippers.go:580]     Audit-Id: 0c3f0872-2a39-4f40-a16e-c279ba17dacd
	I0624 05:50:29.215378   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.215734   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.215734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.215734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.215734   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.215805   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:29.707929   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:29.707929   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.707929   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.707929   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.711499   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.712287   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.712287   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.712287   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Audit-Id: 34f24d0b-61bd-43fc-ab79-953ecae903ef
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.712287   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.712489   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:29.713428   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:29.713499   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:29.713499   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:29.713499   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:29.716930   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:29.716930   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Audit-Id: a94b4251-0329-450f-ad40-2bca3ec91384
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:29.717321   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:29.717321   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:29.717321   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:29 GMT
	I0624 05:50:29.717814   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:30.200411   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:30.200495   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.200495   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.200495   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.204954   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:30.204954   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Audit-Id: d421138f-9626-4abc-ac15-92729819e340
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.204954   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.204954   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.204954   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.205912   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:30.206744   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:30.206744   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.206744   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.206744   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.210402   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:30.210402   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.210603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.210603   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Audit-Id: 145a0ab9-3781-4499-bb99-6dd25eacb5f8
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.210603   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.211162   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:30.698512   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:30.698751   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.698751   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.698751   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.702362   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:30.702362   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.702362   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.702362   14012 round_trippers.go:580]     Audit-Id: 329b231c-1991-48a8-b309-d8337234b734
	I0624 05:50:30.702839   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.702839   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.702839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.702839   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.703226   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:30.703980   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:30.703980   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:30.703980   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:30.703980   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:30.706573   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:30.706573   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:30.706573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:30 GMT
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Audit-Id: 96a23bf9-9d98-4b0e-a6a3-966db6111d70
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:30.706573   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:30.706573   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:30.707753   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:31.200516   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:31.200516   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.200516   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.200716   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.204154   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:31.204154   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Audit-Id: b387e48c-1685-4c7d-9905-16dc24a703d2
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.204154   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.204154   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.204154   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.205158   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:31.205158   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:31.205158   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.205158   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.205158   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.210162   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:31.210162   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.211022   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.211022   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.211022   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.211090   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.211090   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.211090   14012 round_trippers.go:580]     Audit-Id: 585396ea-c0b0-4486-a07d-960cbe7d07ad
	I0624 05:50:31.211090   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:31.211678   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
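	[editor's note] The log above repeats the same pair of requests roughly every 500ms: pod_ready polls the coredns pod (and its node) until the pod's Ready condition becomes True, and the round_trippers lines are client-go's verbose request/response tracing for each poll. As a rough illustration of that polling pattern only (not minikube's actual implementation), a minimal client-go wait loop is sketched below; the kubeconfig path, the 2-minute timeout, and the 500ms interval are assumptions chosen to match the cadence visible in the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust to wherever your cluster credentials live.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Give up after 2 minutes (assumed bound; the real test uses its own timeout).
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		for {
			// GET the pod, mirroring the repeated
			// /api/v1/namespaces/kube-system/pods/coredns-... requests in the log.
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-sq7g6", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			// Wait ~500ms between polls, or stop when the overall timeout expires.
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod readiness")
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	[editor's note] In the failing run the Ready condition never flips to True within the wait window, which is why the status line "Ready":"False" keeps recurring between poll rounds below.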
	I0624 05:50:31.703952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:31.704078   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.704078   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.704185   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.708746   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:31.708847   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Audit-Id: 4b636e50-b83e-4f31-9d83-c36035928e0c
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.708847   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.708847   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.708847   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.708847   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:31.709952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:31.710119   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:31.710119   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:31.710119   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:31.716371   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:31.716371   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Audit-Id: 212ac423-040e-4d4d-9e69-2bbab8a42c91
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:31.716371   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:31.716371   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:31.716371   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:31 GMT
	I0624 05:50:31.716371   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:32.204274   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:32.204373   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.204373   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.204373   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.212050   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:32.212050   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Audit-Id: fb944d1e-24ff-4de9-8e16-aee724f9012d
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.212050   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.212050   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.212050   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.212050   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:32.213003   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:32.213003   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.213003   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.213003   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.215627   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:32.215627   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.215627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.215627   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.215627   14012 round_trippers.go:580]     Audit-Id: 32eb5cc6-dc35-47f0-864e-9c741293901e
	I0624 05:50:32.216631   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:32.703553   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:32.703553   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.703553   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.703553   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.708316   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:32.708602   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.709490   14012 round_trippers.go:580]     Audit-Id: fa89fba0-50c4-4d76-b5b9-594c0467a973
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.709536   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.709536   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.709536   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.709750   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:32.710575   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:32.710575   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:32.710575   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:32.710575   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:32.715892   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:32.715892   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Audit-Id: 1a5da55d-e4d8-437a-b317-910f8947a8d3
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:32.715892   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:32.716142   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:32.716142   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:32.716142   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:32 GMT
	I0624 05:50:32.716235   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:33.205645   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:33.205645   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.205645   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.205645   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.210119   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:33.210504   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Audit-Id: 7528eb69-c4c8-4edc-bb1a-fd1490daa2e7
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.210504   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.210504   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.210504   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.210570   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.210570   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:33.211456   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:33.211456   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.211456   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.211456   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.213966   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:33.213966   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Audit-Id: 279fbd76-29dc-44f0-82d0-445d58ce0faf
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.213966   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.213966   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.214688   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.214688   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.214999   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:33.215531   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:33.703294   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:33.703365   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.703365   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.703365   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.707963   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:33.707963   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Audit-Id: 1d218b8a-7e2b-485b-a44e-b540ca3251b9
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.708378   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.708419   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.708419   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.708419   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.708419   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:33.709179   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:33.709263   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:33.709263   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:33.709263   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:33.711593   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:33.712503   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Audit-Id: b7aeded1-d61e-4024-ae09-ca03a8185597
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:33.712560   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:33.712560   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:33.712560   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:33 GMT
	I0624 05:50:33.712956   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:34.204616   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:34.204616   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.204616   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.204616   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.209245   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:34.209804   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.209804   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.209804   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.209804   14012 round_trippers.go:580]     Audit-Id: ae1cb11a-f417-46f7-be76-a424d38228d1
	I0624 05:50:34.210072   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:34.210925   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:34.210925   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.210925   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.210925   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.220772   14012 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0624 05:50:34.220844   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.220916   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.220916   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.220916   14012 round_trippers.go:580]     Audit-Id: 1274f919-84d9-4a5a-9faa-d9d19c4b8db4
	I0624 05:50:34.220916   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:34.705307   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:34.705307   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.705307   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.705307   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.708886   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:34.708886   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.708886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Audit-Id: 19854679-820d-4405-93e0-b9d16ac62e84
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.708886   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.709104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.709335   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:34.709956   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:34.709956   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:34.709956   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:34.709956   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:34.712977   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:34.712977   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:34 GMT
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Audit-Id: 28fa2005-a22f-4e02-941c-93f9ed318053
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:34.712977   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:34.712977   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:34.712977   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:34.713627   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:35.206258   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:35.206258   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.206258   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.206258   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.210868   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:35.211620   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.211620   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.211620   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.211702   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.211702   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.211702   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.211758   14012 round_trippers.go:580]     Audit-Id: e590013f-20a3-4b9b-9f9a-b2926e452d17
	I0624 05:50:35.211984   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:35.212741   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:35.212741   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.212741   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.212741   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.215667   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:35.215667   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Audit-Id: e44512ae-24b5-4187-afcd-5a45424ee18c
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.215667   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.216582   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.216582   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.216582   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.217062   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:35.217700   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:35.706689   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:35.706775   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.706775   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.706775   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.712902   14012 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0624 05:50:35.713497   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.713497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.713497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.713497   14012 round_trippers.go:580]     Audit-Id: 29f3114a-2168-4e82-b2de-b05e040628d5
	I0624 05:50:35.713553   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:35.714733   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:35.714763   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:35.714763   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:35.714763   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:35.717745   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:35.717745   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Audit-Id: 89fa1007-484f-41ec-b73d-5070001985d6
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:35.717745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:35.717745   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:35.717745   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:35 GMT
	I0624 05:50:35.717745   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:36.206229   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:36.206229   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.206229   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.206229   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.209811   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:36.209811   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.209811   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.210814   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.210814   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.210840   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.210840   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.210840   14012 round_trippers.go:580]     Audit-Id: 710b5fd5-30af-417a-96f6-6d4fce0cc144
	I0624 05:50:36.211048   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:36.211874   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:36.211982   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.211982   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.212056   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.215511   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:36.215511   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Audit-Id: 4e3c82de-95a4-4378-a597-a4de8b7c0869
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.215511   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.215511   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.215511   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.215511   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:36.707883   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:36.707934   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.707934   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.707934   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.712534   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:36.712777   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Audit-Id: ad8f8c9d-041b-447e-88e3-10a93e4ff54c
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.712777   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.712777   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.712777   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.712900   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:36.713636   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:36.713801   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:36.713801   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:36.713801   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:36.717040   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:36.717235   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:36.717337   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:36 GMT
	I0624 05:50:36.717405   14012 round_trippers.go:580]     Audit-Id: cbc4045c-2eee-4688-8de2-9c13ceb5c546
	I0624 05:50:36.717568   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:36.717656   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:36.717734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:36.717734   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:36.718060   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.208216   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:37.208216   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.208216   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.208334   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.212617   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:37.212719   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.212719   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.212719   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.212719   14012 round_trippers.go:580]     Audit-Id: 09124a77-fa51-4249-b4be-b8853c515223
	I0624 05:50:37.212982   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:37.213828   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:37.213828   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.213926   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.213926   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.215962   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:37.215962   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Audit-Id: da14a930-243f-4097-a70f-84a0fd683211
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.215962   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.215962   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.216456   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.216456   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.216831   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.708738   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:37.709155   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.709155   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.709155   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.712965   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:37.712965   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Audit-Id: ee670f3b-eb92-4c78-b8b0-5a3567c773f9
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.713835   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.713835   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.713835   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.714113   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:37.714797   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:37.714868   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:37.714868   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:37.714868   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:37.717183   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:37.717183   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:37.717757   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:37.717757   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:37 GMT
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Audit-Id: d150471b-3aee-4bca-81a6-4510945efa23
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:37.717757   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:37.718450   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:37.719393   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
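[Editor's note — illustration, not part of the captured log.] The trace above shows the readiness wait loop: roughly every 500ms the client GETs the coredns Pod and its node, then reports `"Ready":"False"` until the Pod's Ready condition flips. The sketch below is a hypothetical client-go equivalent of that loop, not minikube's actual pod_ready.go; the pod name, namespace, and ~500ms cadence are taken from the log, and the kubeconfig path is an assumed default.

	// Hypothetical sketch of the polling pattern seen in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: credentials come from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll until the Pod reports the Ready condition, matching the ~500ms cadence in the log.
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-7db6d8ff4d-sq7g6", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if ready {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
	}

[End of editor's note; verbatim log continues below.]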
	I0624 05:50:38.198152   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:38.198152   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.198287   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.198287   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.202550   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:38.202550   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Audit-Id: fbe767cb-dde8-4e58-bde4-1d433ffbc7e3
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.202550   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.202550   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.202733   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.202733   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.204028   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:38.204853   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:38.204937   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.204937   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.204937   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.207899   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:38.208305   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.208305   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Audit-Id: 4235aa7d-71ca-4eea-a40c-75a82628484e
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.208305   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.208305   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.208305   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:38.699443   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:38.699515   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.699515   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.699515   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.703956   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:38.703956   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.703956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Audit-Id: ce85e556-36f9-4c50-a361-927f8c860ef5
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.703956   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.703956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.704514   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:38.705414   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:38.705526   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:38.705526   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:38.705526   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:38.708784   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:38.708784   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:38.708784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:38.708784   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:38 GMT
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Audit-Id: 5ca3fc1c-9fc4-4f5b-aaed-8d33c9dcfb12
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:38.708784   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:38.710233   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:39.200289   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:39.200289   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.200289   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.200289   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.203343   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:39.203343   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.203343   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.203343   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Audit-Id: 27f09fe9-1278-49a9-bd93-f2479893009e
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.203343   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.204766   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:39.205690   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:39.205690   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.205690   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.205800   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.208864   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:39.209737   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.209876   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.209876   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Audit-Id: ffab95d1-a6a0-4c5a-970f-45c4796da043
	I0624 05:50:39.209876   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.210236   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.210468   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:39.700485   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:39.700485   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.700485   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.700485   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.704102   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:39.704102   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Audit-Id: 9866506b-b0de-48b4-8537-749774e85c66
	I0624 05:50:39.704102   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.704998   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.704998   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.704998   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.705221   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:39.705952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:39.706016   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:39.706016   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:39.706016   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:39.708469   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:39.709224   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:39.709224   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:39.709361   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:39.709487   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:39.709556   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:39 GMT
	I0624 05:50:39.709609   14012 round_trippers.go:580]     Audit-Id: 0bd168a9-9a43-4686-87f5-65031b4d49d8
	I0624 05:50:39.709609   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:39.709609   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:40.202101   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:40.202101   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.202101   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.202101   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.205697   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.205697   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.205697   14012 round_trippers.go:580]     Audit-Id: e3af3f2b-a70a-4174-9597-a6750bf84e46
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.206525   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.206525   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.206525   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.206725   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:40.207638   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:40.207638   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.207638   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.207638   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.209570   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:50:40.209570   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.209570   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Audit-Id: 584911ba-6f06-46c1-8580-58d67b06ced1
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.210482   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.210482   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.210482   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.210734   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:40.210802   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:40.702952   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:40.703219   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.703219   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.703219   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.707017   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.707017   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.707017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.707017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Audit-Id: 135a95cf-2709-4fcc-83fb-099ce4a1348c
	I0624 05:50:40.707017   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.707656   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:40.707860   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:40.707860   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:40.708442   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:40.708442   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:40.712055   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:40.712055   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:40.712496   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:40.712496   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:40 GMT
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Audit-Id: f09dca58-aad2-4c4a-8412-4c7dcf6d84ea
	I0624 05:50:40.712496   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:40.712763   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:41.204810   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:41.204810   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.204810   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.204810   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.208421   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:41.209322   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.209435   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Audit-Id: 80b0575b-3b2b-4cfb-9e5c-6d51bff348e7
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.209458   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.209458   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.209675   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:41.210592   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:41.210702   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.210702   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.210702   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.212980   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:41.212980   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Audit-Id: 46187cdc-a9a0-46b4-b980-affdc2ac6c93
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.213879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.213879   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.213879   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.214028   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:41.701136   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:41.701308   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.701308   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.701308   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.705705   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:41.705705   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.705705   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.705926   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Audit-Id: 27310dbc-f40b-461b-b82a-61f3a4db8778
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.705926   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.706010   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:41.706854   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:41.706917   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:41.706917   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:41.706917   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:41.709606   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:41.710316   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:41.710316   14012 round_trippers.go:580]     Audit-Id: 7fb44fe7-126b-4811-ae10-63715e7b6705
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:41.710396   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:41.710396   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:41.710396   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:41 GMT
	I0624 05:50:41.710396   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:42.200824   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:42.200824   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.200824   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.200909   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.204104   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.204104   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Audit-Id: f95af931-0962-4019-8a34-b8dfe825ec27
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.204104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.204104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.204104   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.205430   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:42.206384   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:42.206483   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.206483   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.206483   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.209836   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.209836   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.209836   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Audit-Id: 5efd11b1-6e20-43ea-9301-58346c266c6d
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.209836   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.209836   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.210647   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:42.211106   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:42.701897   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:42.701978   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.701978   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.701978   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.705406   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:42.705406   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Audit-Id: ebc7bf6c-cdba-41a4-b8eb-c905c93c54f2
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.706160   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.706160   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.706160   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.706436   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:42.706778   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:42.706778   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:42.706778   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:42.706778   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:42.715514   14012 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0624 05:50:42.716402   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:42 GMT
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Audit-Id: e204d810-5631-4abc-b839-680590d1f034
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:42.716402   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:42.716402   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:42.716402   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:42.716985   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:43.201070   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:43.201146   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.201146   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.201146   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.205426   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:43.205426   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Audit-Id: dab85ba5-bd04-44f6-9788-a99ae6687789
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.205754   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.205754   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.205754   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.206079   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:43.207024   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:43.207087   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.207087   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.207087   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.209410   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:43.209410   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Audit-Id: 3f85bbdc-ec45-45a5-a97d-18cbf30e73bf
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.209410   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.209410   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.209410   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.210790   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:43.702606   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:43.702606   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.702606   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.702606   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.707213   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:43.707213   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.707302   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.707302   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.707302   14012 round_trippers.go:580]     Audit-Id: a8cae300-77f7-44ad-9db0-71a6de5c326c
	I0624 05:50:43.708225   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:43.708417   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:43.708417   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:43.708417   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:43.708417   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:43.712061   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:43.712061   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:43.712061   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:43 GMT
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Audit-Id: 6c55b8f8-0514-4750-8e48-2fc390a39b24
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:43.712204   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:43.712204   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:43.712458   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:44.203845   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:44.203845   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.203845   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.203845   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.207425   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:44.207512   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.207512   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.207512   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.207512   14012 round_trippers.go:580]     Audit-Id: b8ef4fbf-2d35-4f10-8316-27065d9db5eb
	I0624 05:50:44.207695   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:44.208587   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:44.208587   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.208659   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.208659   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.211599   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.211791   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.211791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Audit-Id: 7961e8fa-5329-4b0c-9f6e-20630bb4aa77
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.211791   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.211791   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.212673   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:44.213164   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:44.703088   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:44.703349   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.703349   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.703349   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.705789   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.705789   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.705789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.705789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Audit-Id: 29fb7f5b-8a90-43b5-a0ed-99defd64dcac
	I0624 05:50:44.705789   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.707214   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:44.707992   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:44.707992   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:44.707992   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:44.707992   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:44.710576   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:44.710576   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:44.710576   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:44.710576   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:44 GMT
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Audit-Id: 69e4d8ec-200c-45fb-8ac0-dabb9af5b0a4
	I0624 05:50:44.710576   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:44.711275   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:44.711658   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:45.199946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:45.200036   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.200036   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.200036   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.204965   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:45.205294   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Audit-Id: 3d6a5403-10c7-4ace-b7a2-b7779ee91153
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.205294   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.205294   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.205294   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.206275   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:45.206988   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:45.206988   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.206988   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.206988   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.208595   14012 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0624 05:50:45.209746   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.209746   14012 round_trippers.go:580]     Audit-Id: dcc81d8e-c448-45bd-9026-32ef5256d02a
	I0624 05:50:45.209746   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.209810   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.209810   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.209810   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.209810   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.210223   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:45.697743   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:45.697999   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.697999   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.697999   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.701347   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:45.701347   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.701347   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.701347   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Audit-Id: 64a5cc30-842b-4df1-bc50-af1c5a5658e9
	I0624 05:50:45.701347   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.703060   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:45.703987   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:45.704048   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:45.704105   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:45.704105   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:45.707104   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:45.707104   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:45 GMT
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Audit-Id: 17d05764-09c2-466e-84d1-8807d124a4d3
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:45.707104   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:45.707104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:45.707104   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:45.708862   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.199612   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:46.199612   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.199612   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.199612   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.203191   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:46.203191   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Audit-Id: 6d38495c-1595-42a2-9d0a-45a51ece0e96
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.203191   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.203191   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.203956   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.203956   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.204296   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:46.205193   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:46.205238   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.205238   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.205238   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.207807   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:46.207807   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.207807   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Audit-Id: db5a7b83-4c8b-4cd7-8c5a-25ff629ad507
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.207807   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.207807   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.209301   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.698687   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:46.698758   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.698758   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.698758   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.703008   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:46.703008   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Audit-Id: 267c13d1-5975-4d40-9cec-ed87f9a99293
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.703008   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.703008   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.703146   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.703146   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.703358   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:46.704101   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:46.704101   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:46.704192   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:46.704192   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:46.709136   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:46.709732   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:46.709732   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:46 GMT
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Audit-Id: e7701e6f-c8f2-4e63-98fd-4ba86b63b7b4
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:46.709732   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:46.709732   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:46.709879   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:46.710580   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:47.201604   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:47.201604   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.201604   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.201777   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.206181   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:47.206355   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Audit-Id: 37f96e35-2021-418e-a347-7dd4a96c0724
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.206355   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.206355   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.206355   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.206653   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:47.207425   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:47.207425   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.207425   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.207425   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.213077   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:47.213077   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.213077   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.213077   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Audit-Id: 9e737c44-23db-40d5-bab1-401986426d75
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.213077   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.213077   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:47.699238   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:47.699278   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.699278   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.699278   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.702898   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:47.702898   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.703845   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.703845   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.703886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.703886   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.703886   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.703886   14012 round_trippers.go:580]     Audit-Id: b8c95419-2597-4b55-a78e-72f849be61c6
	I0624 05:50:47.704099   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:47.704759   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:47.704759   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:47.704759   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:47.704759   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:47.706798   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:47.707849   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Audit-Id: fe8adc43-07a7-4da0-94df-74cdfbd9687a
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:47.707849   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:47.707849   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:47.707849   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:47 GMT
	I0624 05:50:47.708228   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.204359   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:48.204431   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.204431   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.204431   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.208385   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:48.208385   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.208385   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.208385   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Audit-Id: ef598147-1fcf-4bda-85ab-0c10cd9fd175
	I0624 05:50:48.208385   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.208871   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:48.209967   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:48.209967   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.209967   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.210032   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.214255   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:48.214405   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.214405   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Audit-Id: 9568c26a-2a32-4085-8908-e71a0179feb3
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.214405   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.214405   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.214911   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.696805   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:48.696901   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.696901   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.696901   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.702757   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:48.702853   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.702853   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.702853   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.702853   14012 round_trippers.go:580]     Audit-Id: b1bcac8d-f350-47f9-83a4-bbcd7b6e1a59
	I0624 05:50:48.703038   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:48.703927   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:48.703927   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:48.703927   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:48.703927   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:48.708858   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:48.709789   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Audit-Id: 3c325530-0a95-493e-8c6d-2a4015f5766d
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:48.709789   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:48.709789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:48.709789   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:48.709859   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:48 GMT
	I0624 05:50:48.711154   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:48.711620   14012 pod_ready.go:102] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"False"
	I0624 05:50:49.203832   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:49.204012   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.204012   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.204082   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.209540   14012 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0624 05:50:49.210435   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.210435   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Audit-Id: df743d75-5896-4d8d-ae9f-a629513f97d2
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.210435   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.210509   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.210768   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1764","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0624 05:50:49.211541   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:49.211600   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.211600   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.211600   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.214276   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:49.214276   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Audit-Id: 14573cd5-79aa-4ce0-bab3-200d2ccd6c2a
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.214276   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.214276   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.214276   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.215242   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:49.704933   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:49.704933   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.705000   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.705000   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.720000   14012 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0624 05:50:49.720263   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.720263   14012 round_trippers.go:580]     Audit-Id: b6f0ff52-d323-4fef-ab68-7082b5ce5f06
	I0624 05:50:49.720263   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.720364   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.720364   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.720364   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.720364   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.720577   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1952","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0624 05:50:49.721169   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:49.721169   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:49.721169   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:49.721169   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:49.725921   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:49.725921   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Audit-Id: 509f524e-1cc2-4b71-9a15-bb37cdfb2532
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:49.725921   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:49.725921   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:49.725921   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:49.726464   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:49 GMT
	I0624 05:50:49.726967   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.208274   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-sq7g6
	I0624 05:50:50.208274   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.208274   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.208274   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.211874   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.211874   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.211874   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.211874   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.211874   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Audit-Id: 6875be7c-8d78-47b1-8fd6-ede70aed85ee
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.212615   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.213149   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0624 05:50:50.214241   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.214241   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.214241   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.214241   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.217093   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.217093   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.217093   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.217345   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.217345   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Audit-Id: 49c6a5a4-cd7e-4780-8b5e-1466a5d80688
	I0624 05:50:50.217345   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.217510   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.217510   14012 pod_ready.go:92] pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.218052   14012 pod_ready.go:81] duration metric: took 25.5230519s for pod "coredns-7db6d8ff4d-sq7g6" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.218052   14012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.218217   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-876600
	I0624 05:50:50.218217   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.218217   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.218217   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.222411   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:50.222547   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.222547   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.222547   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.222606   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.222606   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.222606   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.222626   14012 round_trippers.go:580]     Audit-Id: da8ab028-99ed-49a4-b0e6-0f810bf7c8de
	I0624 05:50:50.222842   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-876600","namespace":"kube-system","uid":"c5bc6108-18d3-4bf9-8b39-a020f13cfefb","resourceVersion":"1853","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.31.217.139:2379","kubernetes.io/config.hash":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.mirror":"3fd3eb9408db2ef91e6f7d911ed85123","kubernetes.io/config.seen":"2024-06-24T12:49:37.824434229Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0624 05:50:50.223405   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.223523   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.223523   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.223523   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.227168   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.227168   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Audit-Id: 5c1b6e9e-798b-45f4-82bd-71c0bf1da5bc
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.227168   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.227168   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.227168   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.227168   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.227917   14012 pod_ready.go:92] pod "etcd-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.227917   14012 pod_ready.go:81] duration metric: took 9.8651ms for pod "etcd-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.227917   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.227917   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-876600
	I0624 05:50:50.227917   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.227917   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.227917   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.230491   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.230491   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Audit-Id: cf0fb134-b92b-40e0-b6fe-da7f623af6d8
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.230491   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.230491   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.230491   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.231030   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-876600","namespace":"kube-system","uid":"52a1504b-2338-458c-b448-92e8836b479a","resourceVersion":"1846","creationTimestamp":"2024-06-24T12:49:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.31.217.139:8443","kubernetes.io/config.hash":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.mirror":"3038ef4054f2a74be3ac6770afa89a1a","kubernetes.io/config.seen":"2024-06-24T12:49:37.772966703Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:49:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0624 05:50:50.231643   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.231734   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.231734   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.231734   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.234071   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.234559   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.234559   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.234559   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.234559   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.234559   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.234613   14012 round_trippers.go:580]     Audit-Id: df3ad430-1866-42b0-8bfd-d801319ce2e5
	I0624 05:50:50.234613   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.234647   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.234647   14012 pod_ready.go:92] pod "kube-apiserver-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.235250   14012 pod_ready.go:81] duration metric: took 7.3325ms for pod "kube-apiserver-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.235250   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.235444   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-876600
	I0624 05:50:50.235509   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.235509   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.235509   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.238315   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.238315   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.238315   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.238315   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.238315   14012 round_trippers.go:580]     Audit-Id: af10861d-392a-4f44-b4b8-286e7c1e4cda
	I0624 05:50:50.238713   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.238713   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.238713   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.238816   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-876600","namespace":"kube-system","uid":"ce6cdb16-15c7-48bf-9141-2e1a39212098","resourceVersion":"1858","creationTimestamp":"2024-06-24T12:26:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.mirror":"a20f51e7dce32bda1f77fbfb30315284","kubernetes.io/config.seen":"2024-06-24T12:26:19.276205807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0624 05:50:50.239620   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.239620   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.239620   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.239729   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.242415   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.242415   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Audit-Id: 6fe68ddc-c0bc-4307-8fac-49c1f78e2bef
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.242415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.242415   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.242415   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.242780   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.243319   14012 pod_ready.go:92] pod "kube-controller-manager-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.243429   14012 pod_ready.go:81] duration metric: took 8.1145ms for pod "kube-controller-manager-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.243490   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.243618   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjjs8
	I0624 05:50:50.243664   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.243664   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.243664   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.247358   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.247494   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.247494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.247494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Audit-Id: 2f68711f-b479-4a0f-b39a-045b1c99f7b5
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.247494   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.247803   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hjjs8","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e148504-3300-4591-9576-7c5597851f41","resourceVersion":"1939","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:50:50.247803   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m02
	I0624 05:50:50.248331   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.248331   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.248331   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.250376   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:50.250376   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.250376   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Audit-Id: ec0fd1fa-fcfd-49b0-a0f5-eeea8ac968a3
	I0624 05:50:50.250376   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.251017   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.251017   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.251235   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m02","uid":"c12e405b-fea8-4538-af14-83248535d228","resourceVersion":"1943","creationTimestamp":"2024-06-24T12:29:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_29_42_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:29:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0624 05:50:50.251704   14012 pod_ready.go:97] node "multinode-876600-m02" hosting pod "kube-proxy-hjjs8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m02" has status "Ready":"Unknown"
	I0624 05:50:50.251704   14012 pod_ready.go:81] duration metric: took 8.2144ms for pod "kube-proxy-hjjs8" in "kube-system" namespace to be "Ready" ...
	E0624 05:50:50.251704   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m02" hosting pod "kube-proxy-hjjs8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m02" has status "Ready":"Unknown"
	I0624 05:50:50.251795   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.412696   14012 request.go:629] Waited for 160.6528ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:50:50.412899   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lcc9v
	I0624 05:50:50.412899   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.412899   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.413024   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.420711   14012 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0624 05:50:50.420711   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Audit-Id: 299d6a0e-4928-45ca-ba8b-ac6502375d69
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.420711   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.420711   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.420711   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.421674   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lcc9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"038c238e-3e2b-4d31-a68c-64bf29863d8f","resourceVersion":"1835","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0624 05:50:50.617508   14012 request.go:629] Waited for 194.9795ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.617694   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:50.617694   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.617694   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.617694   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.622257   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:50.622484   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.622484   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.622484   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.622484   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Audit-Id: 1aa639b0-062e-4be3-b537-db1e3604ea22
	I0624 05:50:50.622534   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.622864   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:50.623554   14012 pod_ready.go:92] pod "kube-proxy-lcc9v" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:50.623554   14012 pod_ready.go:81] duration metric: took 371.758ms for pod "kube-proxy-lcc9v" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.623554   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:50.821681   14012 request.go:629] Waited for 197.8096ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:50:50.821946   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf7jm
	I0624 05:50:50.821946   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:50.821946   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:50.821946   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:50.825504   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:50.826314   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Audit-Id: e1b8c870-5a55-4a8c-9b00-1fc656c01133
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:50.826314   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:50.826314   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:50.826314   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:50.826595   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf7jm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4f99ace-bf94-40d8-b28f-27ec938418ef","resourceVersion":"1727","creationTimestamp":"2024-06-24T12:34:19Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bb1d997e-6577-463e-a401-3b630a0b3596","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb1d997e-6577-463e-a401-3b630a0b3596\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0624 05:50:51.009270   14012 request.go:629] Waited for 181.7474ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:50:51.009373   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600-m03
	I0624 05:50:51.009373   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.009373   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.009373   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.013220   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:50:51.014236   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Audit-Id: dca87f8d-5b45-4ca4-8340-ac8714659904
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.014236   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.014279   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.014279   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.014279   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:50 GMT
	I0624 05:50:51.014706   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600-m03","uid":"1392cc6a-2e48-4bde-9120-b3d99174bf99","resourceVersion":"1891","creationTimestamp":"2024-06-24T12:45:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_24T05_45_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0624 05:50:51.014706   14012 pod_ready.go:97] node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:50:51.015284   14012 pod_ready.go:81] duration metric: took 391.6036ms for pod "kube-proxy-wf7jm" in "kube-system" namespace to be "Ready" ...
	E0624 05:50:51.015284   14012 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-876600-m03" hosting pod "kube-proxy-wf7jm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-876600-m03" has status "Ready":"Unknown"
	I0624 05:50:51.015499   14012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:51.213376   14012 request.go:629] Waited for 197.8157ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:50:51.213549   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-876600
	I0624 05:50:51.213651   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.213651   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.213742   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.218086   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:50:51.218684   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:51 GMT
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Audit-Id: b9b097ad-d339-43e2-86b9-d986d6804896
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.218684   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.218684   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.218684   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.218868   14012 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-876600","namespace":"kube-system","uid":"90049cc9-8d7b-4f11-8126-038131eafec1","resourceVersion":"1848","creationTimestamp":"2024-06-24T12:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.mirror":"50c7b7ba99620272d80c509bd4d93e67","kubernetes.io/config.seen":"2024-06-24T12:26:27.293353655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0624 05:50:51.417429   14012 request.go:629] Waited for 197.8367ms due to client-side throttling, not priority and fairness, request: GET:https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:51.417851   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes/multinode-876600
	I0624 05:50:51.417851   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:51.417851   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:51.417851   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:51.420821   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:51.421494   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Audit-Id: 5c4da465-9bef-4803-b32c-e3eb42b083cd
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:51.421494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:51.421494   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:51.421494   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:51 GMT
	I0624 05:50:51.421757   14012 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-24T12:26:23Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0624 05:50:51.422416   14012 pod_ready.go:92] pod "kube-scheduler-multinode-876600" in "kube-system" namespace has status "Ready":"True"
	I0624 05:50:51.422466   14012 pod_ready.go:81] duration metric: took 406.9049ms for pod "kube-scheduler-multinode-876600" in "kube-system" namespace to be "Ready" ...
	I0624 05:50:51.422557   14012 pod_ready.go:38] duration metric: took 26.740062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0624 05:50:51.422557   14012 api_server.go:52] waiting for apiserver process to appear ...
	I0624 05:50:51.432044   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:51.458761   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:51.458761   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:51.467978   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:51.496085   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:51.496085   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:51.504069   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:51.527791   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:51.527791   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:51.527915   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:51.536556   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:51.557989   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:51.557989   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:51.557989   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:51.567037   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:51.588414   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:51.588414   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:51.588414   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:51.596415   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:51.620411   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:51.620411   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:51.620411   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:51.628442   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:51.651409   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:51.651409   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:51.652620   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:51.652620   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:51.652712   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:51.884183   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:51.884183   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:51.884183   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.884183   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.884183   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:51.884183   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:51.884183   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.884183   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.884183   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:51.884183   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.884183   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:44 +0000
	I0624 05:50:51.884183   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.884183   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:51.884739   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:51.884739   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:51.884739   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:51.884739   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:51.884860   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:51.884860   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.884959   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:51.885033   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:51.885033   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.885076   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.885076   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.885076   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.885076   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.885076   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.885076   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.885076   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.885076   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.885199   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:51.885199   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.885199   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.885199   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.885305   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.885344   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:51.885344   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:51.885344   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:51.885384   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.885409   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.885409   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:51.885409   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:51.885409   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0624 05:50:51.885477   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:51.885477   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0624 05:50:51.885548   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:51.885582   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.885648   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:51.885648   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:51.885648   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:51.885648   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:51.885715   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:51.885715   14012 command_runner.go:130] > Events:
	I0624 05:50:51.885715   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:51.885715   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:51.885715   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:51.885785   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.885852   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:51.885880   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:51.885913   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:51.885913   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.885938   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:51.886012   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:51.886012   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:51.886012   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:51.886079   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.886106   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.886106   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.886138   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:51.886187   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.886187   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.886218   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.886218   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:51.886218   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:51.886218   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:51.886218   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.886218   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:51.886218   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.886218   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:51.886218   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:51.886218   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.886218   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:51.886218   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:51.886218   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.886218   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.886218   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.886218   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.886218   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.886218   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.886218   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.886743   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:51.886743   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.886743   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.886743   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.886928   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:51.886928   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:51.886928   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:51.886928   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.886992   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:51.886992   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:51.886992   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:51.886992   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:51.886992   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:51.886992   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:51.886992   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:51.886992   14012 command_runner.go:130] > Events:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:51.886992   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:51.886992   14012 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:51.886992   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:51.886992   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:51.886992   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:51.886992   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:51.886992   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:51.886992   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:51.886992   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:51.886992   14012 command_runner.go:130] > Lease:
	I0624 05:50:51.886992   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:51.886992   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:51.886992   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:51.886992   14012 command_runner.go:130] > Conditions:
	I0624 05:50:51.887571   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:51.887571   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:51.887844   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:51.887844   14012 command_runner.go:130] > Addresses:
	I0624 05:50:51.887844   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:51.887844   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:51.887844   14012 command_runner.go:130] > Capacity:
	I0624 05:50:51.887844   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.887844   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.887844   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.887844   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.887844   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.888382   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:51.888459   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:51.888459   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:51.888459   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:51.888523   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:51.888557   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:51.888557   14012 command_runner.go:130] > System Info:
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:51.888603   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:51.888603   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:51.888603   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:51.888696   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:51.888696   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:51.888765   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:51.888809   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:51.888809   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:51.888809   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:51.888881   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:51.888881   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:51.888881   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:51.888881   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:51.889018   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:51.889018   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:51.889080   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:51.889122   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:51.889122   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:51.889122   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:51.889122   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:51.889242   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:51.889242   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:51.889287   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:51.889287   14012 command_runner.go:130] > Events:
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:51.889287   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  Starting                 5m35s                  kube-proxy       
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:51.889287   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.889828   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:51.889828   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m39s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  RegisteredNode           5m36s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:51.889917   14012 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:51.900471   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:51.900471   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:51.935016   14012 command_runner.go:130] > .:53
	I0624 05:50:51.935016   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:51.935016   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:51.935016   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:51.935016   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:51.935016   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:50:51.935016   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:50:51.964706   14012 command_runner.go:130] > .:53
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:51.964706   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:51.964706   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:50:51.964706   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:50:51.965288   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:50:51.965358   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:50:51.965452   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:50:51.965516   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:50:51.965552   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:50:51.965673   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:50:51.965698   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:50:51.965698   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:50:51.965760   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:50:51.965760   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:50:51.965790   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:50:51.965790   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:50:51.965827   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0624 05:50:51.968543   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:51.968543   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:51.995488   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:51.996162   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:51.996264   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:51.996333   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:51.996458   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:51.996530   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:51.996530   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:51.998850   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:51.998850   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:52.042210   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042247   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042342   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042423   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:52.042499   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042564   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042606   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042630   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042630   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042677   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042751   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042812   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042835   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042835   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.042860   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043393   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043457   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043554   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:52.043628   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:52.043746   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043814   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.043957   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044049   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044112   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044133   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044206   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:52.044274   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044361   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044392   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044392   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044443   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.044493   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045038   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045155   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:52.045690   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:52.045751   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045857   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045857   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.045922   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.045922   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.045970   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.045995   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046027   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046027   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046062   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:52.046062   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046092   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046092   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046150   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:52.046198   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046198   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046236   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046294   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046366   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046366   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046414   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046455   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:52.046547   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046547   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046572   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046609   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046625   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046625   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046651   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046756   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046756   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:52.046839   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046853   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046910   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:52.046934   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046934   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:52.046962   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047530   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047600   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:52.047600   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047647   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:52.047689   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047770   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047770   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047802   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.047874   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.047969   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048033   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:52.048071   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048110   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:52.048153   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048254   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048352   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048352   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048434   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048502   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048621   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048695   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048780   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.048823   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:52.048823   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.048871   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.048916   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.066982   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:52.067976   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:52.103673   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.103930   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:52.104099   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:52.104122   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:52.104198   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:52.104265   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:52.104331   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:52.104397   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.104531   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:52.104599   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:52.104705   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:52.104792   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:52.104897   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:52.104897   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:52.104933   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:52.104964   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:52.104964   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:52.105015   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:52.105108   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:52.105178   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:52.105247   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:52.105307   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:52.105347   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:52.105390   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:52.105390   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:52.105433   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:52.105522   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:52.105522   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:52.105588   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:52.105733   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:52.105733   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:52.105795   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.105795   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:52.105828   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:52.105828   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:52.105828   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.106373   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.106425   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:52.106425   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:52.106492   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:52.106540   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:52.106540   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:52.106585   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:52.106654   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:52.106733   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:52.106762   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:52.106829   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:52.106901   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:52.106957   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:52.106980   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:52.106980   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:52.107072   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:52.107103   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:52.107640   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:52.107708   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:52.107821   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:52.107886   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:52.107927   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:52.107927   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:52.107956   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:52.108507   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.108685   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.108716   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:52.108789   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.108855   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:52.108881   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:52.108881   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:52.108939   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:52.109025   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:52.109046   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:52.109119   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:52.109191   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:52.109250   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:52.109310   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:52.109368   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.128256   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:52.128256   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:52.193315   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:52.193315   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:52.193315   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:52.193315   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:52.193315   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:52.193315   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:52.193315   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:52.193315   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:52.193315   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:52.193844   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:52.193889   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:52.193952   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:52.194007   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:52.194069   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:52.194144   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:52.194144   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:52.194194   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:52.196600   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:50:52.196600   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:50:52.229502   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.230216   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:52.230324   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.230324   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:52.230387   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.230387   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:52.230510   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.232678   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:52.232743   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:52.265642   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.265642   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:52.265642   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.265880   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:52.265880   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:52.265880   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:52.265956   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.265956   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.266023   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:52.266041   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.266041   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.266041   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.266109   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.266176   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266250   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266287   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.266356   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.266356   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.266425   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.266425   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266492   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266492   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.266561   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.266622   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.266622   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.266702   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266702   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266788   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266788   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.266877   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.266948   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.267010   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.267010   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.267112   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267161   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267188   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267227   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267473   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:52.267569   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.267652   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:52.267652   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267726   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:52.267794   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267884   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:52.267918   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.267963   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.267963   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.268042   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:52.268042   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.268104   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:52.268129   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268234   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.268314   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.268338   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:52.268368   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.268408   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:52.272855   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:52.272903   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:52.272903   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:52.282341   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:52.282341   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:52.310703   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:52.310751   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:52.310795   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:52.310838   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:52.310838   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:52.310864   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:52.315988   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:50:52.315988   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:50:52.344643   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:50:52.344643   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:50:52.345508   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:50:52.345508   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345675   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345744   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:50:52.345744   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345815   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345849   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:52.345873   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345873   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:52.345929   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:52.348640   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:52.348640   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:52.377904   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:52.378901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:52.379901   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.380900   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.381901   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.382924   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.383900   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:52.384906   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:52.426900   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:50:52.426900   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:50:52.447923   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:50:52.447923   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:50:52.447923   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:50:52.448932   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:50:52.450901   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:50:52.450901   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:50:52.480899   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:50:52.480899   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:50:52.481184   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.481384   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:50:52.481453   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:52.481453   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:50:52.481526   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:50:52.481603   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:50:52.481603   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:50:52.483042   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:50:52.487824   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:50:52.487824   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:50:52.488816   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:50:52.488816   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:50:52.489822   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:50:52.496815   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:52.496815   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:52.525250   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"in
itial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:52.525312   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:52.525838   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:52.525982   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:52.526037   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:52.526066   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:52.526097   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:52.526126   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:52.526161   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:52.526161   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:52.526222   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:52.534097   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:50:52.534097   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:52.561675   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.562683   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:52.563676   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:52.564670   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:52.564670   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:52.565678   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:52.566693   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:50:52.567667   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:50:52.581683   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:52.581683   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.612934   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.613465   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.613465   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.613512   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.613512   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.613560   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.613622   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613622   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613665   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.613695   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614221   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614221   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614266   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614360   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614382   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.614471   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:52.614498   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:52.614576   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:52.614641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:52.614641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.615027   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615551   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615594   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615684   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615725   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615725   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615778   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:52.615842   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:52.615916   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:52.615992   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:52.616055   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:52.616116   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:52.616145   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.616210   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:52.616249   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:52.616249   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:52.616327   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:52.616327   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:52.616367   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:52.616424   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:52.616476   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:52.616476   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:52.616512   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:52.616512   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:52.616551   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:52.616551   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:52.616589   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:52.616648   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:52.616671   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.616698   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:52.617226   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:52.617226   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:52.617275   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:52.617353   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:52.617385   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:52.617385   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617447   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617984   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.617984   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618080   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618231   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618300   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618382   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:52.618382   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:52.618475   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:52.620288   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:52.620462   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:52.620494   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620536   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620717   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620809   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620809   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.620879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.620973   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621005   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621061   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621061   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621148   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621174   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621203   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621728   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621869   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621894   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621948   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.621948   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.621990   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622143   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622143   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622218   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622218   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622272   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.622272   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622347   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622372   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622419   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622436   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:50:52.622479   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.622507   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:52.623030   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:52.623166   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:52.623195   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:55.165524   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:50:55.195904   14012 command_runner.go:130] > 1846
	I0624 05:50:55.195904   14012 api_server.go:72] duration metric: took 1m6.8294375s to wait for apiserver process to appear ...
	I0624 05:50:55.195904   14012 api_server.go:88] waiting for apiserver healthz status ...
	I0624 05:50:55.206294   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:55.230775   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:55.231779   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:55.241709   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:55.267588   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:55.268429   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:55.278463   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:55.301966   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:55.302295   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:55.302295   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:55.312228   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:55.338292   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:55.338292   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:55.338292   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:55.348214   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:55.375100   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:55.375100   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:55.375100   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:55.386326   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:55.413476   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:55.413476   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:55.414654   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:55.424594   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:55.451675   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:55.451675   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:55.452023   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:55.452089   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:50:55.452163   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:50:55.481047   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:50:55.481624   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:50:55.481693   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:50:55.481736   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:50:55.481736   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:50:55.481773   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:50:55.481773   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:50:55.481773   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:50:55.481807   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:50:55.481867   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:50:55.481867   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:50:55.481867   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:50:55.481913   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:50:55.481986   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:50:55.482035   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:50:55.482035   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:50:55.482085   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:50:55.482085   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:50:55.482121   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:50:55.482121   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:50:55.482152   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:50:55.482152   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:50:55.484212   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:55.484303   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:55.513413   14012 command_runner.go:130] > .:53
	I0624 05:50:55.513478   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:55.513478   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:55.513541   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:55.513541   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:55.514343   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:55.514411   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:55.551536   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:55.552266   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:55.552330   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:55.552330   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:55.552398   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:55.552463   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:55.552463   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:55.552529   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.552529   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:55.552609   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:55.552676   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:55.552760   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:55.552808   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:55.552859   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:55.552859   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:55.559556   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:55.559613   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592099   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592259   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592259   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:55.592390   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:55.592477   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:55.592477   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592580   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:55.592700   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:55.592864   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.592984   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:55.592984   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:55.593097   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:55.593228   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:55.593342   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:55.593457   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:55.593547   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:55.593652   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:55.593764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:55.593764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:55.593872   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.593945   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594003   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:55.594098   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:55.594386   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:55.594536   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:55.594687   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:55.594827   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.594952   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:55.595146   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:55.595224   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:55.595287   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:55.595287   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:55.595331   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:55.595427   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:55.595594   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:55.595701   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:55.595701   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:55.595806   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.595806   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.595918   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.595918   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:55.596047   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:55.596164   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:55.596294   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:55.596406   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:55.596406   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:55.596524   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:55.596524   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:55.596634   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:55.596634   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:55.596747   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:55.596861   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:55.596861   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:55.596974   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:55.597085   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:55.597085   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:55.597197   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597197   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597299   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597410   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.597410   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597521   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597630   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597630   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:55.597733   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.597733   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.597843   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:55.597957   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:55.597957   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:55.598067   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.598067   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.598181   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:55.598181   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.598291   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.598291   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598403   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598403   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598515   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598571   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:55.598609   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:55.598701   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598768   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598859   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.598859   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:55.599048   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:55.599104   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.599162   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:55.599252   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:55.599303   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:55.599424   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:55.599462   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:55.599509   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:55.599566   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:55.599618   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.599721   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:55.600274   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:55.600371   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:55.600484   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.600537   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601079   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601130   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.601130   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601283   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.601827   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.601898   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.601898   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602018   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.602156   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.602213   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602292   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602408   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602477   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602649   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602705   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.602757   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.602861   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.603462   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604273   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.604841   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.604841   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:55.605116   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:55.605919   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:55.647121   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:55.647121   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:55.683233   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.683550   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:55.683550   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.683615   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.683722   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:55.683778   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.683820   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:55.683865   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:55.683887   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:55.684438   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:55.684438   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:55.684512   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:55.684607   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:55.684695   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:55.684719   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:55.684779   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:55.684852   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:55.684885   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:55.684911   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:55.684945   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:55.685002   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:55.685048   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:55.685048   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:55.685171   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:55.685283   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:55.685283   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.686479   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:55.686479   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:55.686554   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:55.686554   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:55.686595   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:55.686595   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:55.686595   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:55.687633   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:55.687690   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:55.687814   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.687902   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:55.687902   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:55.687964   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:55.688034   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:55.688115   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:55.688183   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:55.688261   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:55.688332   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:55.688399   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:55.688469   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:55.688550   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:55.688645   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.688703   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:55.688842   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:55.688972   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.689063   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.689186   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:55.689281   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:55.689371   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:55.689458   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:55.689458   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:55.689543   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:55.689627   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:55.689711   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.689817   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:55.689898   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:55.689898   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.690007   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:55.690091   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.710689   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:50:55.710725   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:50:55.749020   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:50:55.749824   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:50:55.749824   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:50:55.749990   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750143   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:50:55.750294   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750349   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750349   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750427   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750427   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750490   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.750581   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.750673   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.750673   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.752437   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:55.752437   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:55.783988   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:55.784416   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:55.784517   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:55.784517   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:55.784578   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:55.785170   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:55.785278   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:55.785378   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:55.785378   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:55.785452   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:55.785538   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:55.785538   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:55.785611   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:55.785611   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:55.785667   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:55.785667   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:55.785712   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:55.785760   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:55.785806   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:55.792160   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:50:55.792328   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:50:55.821056   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.821746   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:55.821850   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.821850   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:55.821942   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.824408   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:55.824476   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:55.860513   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.860513   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:55.861419   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.862214   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:55.862287   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.862287   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.862287   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.862357   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.862357   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862433   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862433   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.862494   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.862577   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.862641   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.862725   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.862725   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.862790   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862790   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862864   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862864   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.862940   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863025   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863025   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863097   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863187   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863187   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863231   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.863287   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.863287   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.863359   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:55.863481   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863481   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:55.863525   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863525   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:55.863602   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863631   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863631   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863680   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:55.863680   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.863738   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:55.863738   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863802   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863802   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863888   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.863888   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.863959   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:55.863959   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.864036   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:55.864085   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.864168   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:55.864168   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.864253   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:55.864316   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:55.864316   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:55.878075   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:50:55.878075   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:50:55.927396   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:55.927716   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:55.927716   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:55.928028   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:55.928095   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:55.928139   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:55.928165   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:55.928219   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:55.928265   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:55.928290   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:55.928405   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:55.928405   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:55.928676   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:55.928747   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:55.928747   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:55.928803   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:55.928803   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:55.928863   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:55.928863   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:55.929070   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:55.929559   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:55.929814   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:55.930021   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:55.930021   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:55.930088   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:55.930162   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:55.930234   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:55.930234   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.930446   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:55.930651   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.931678   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931836   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931836   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931897   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:55.931917   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:55.931967   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:55.931967   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:55.932186   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:55.932186   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:55.932186   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:55.933803   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:55.934780   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:55.934828   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:55.934872   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:55.934953   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:55.935017   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:55.935103   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:55.935176   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:55.935176   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:55.935223   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:55.935223   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:55.935262   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:55.935262   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:55.935320   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:55.935412   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:55.935450   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:55.935450   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:55.935483   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:55.935483   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:55.935525   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:55.935525   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:55.935561   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:55.935620   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.935620   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:55.935725   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:55.935725   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:55.935750   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:55.935813   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:55.935850   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:55.935850   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:55.935920   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:55.936000   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:55.936033   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:55.936069   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:55.936099   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:55.936724   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:55.936724   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:55.936812   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:55.936917   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:55.936966   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:55.937021   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:55.937021   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:55.937067   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937109   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:55.937156   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:55.937197   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937197   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:55.937242   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937242   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:55.937281   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:55.937324   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:55.937382   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:55.937382   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:55.937428   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:55.937468   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:55.937512   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:55.937512   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:55.937551   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:55.937594   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:55.937633   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:55.937633   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:55.937676   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:55.937715   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:55.937759   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:55.937798   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:55.937798   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:50:55.937933   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:50:55.957698   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:55.957698   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:55.989949   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:55.990666   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.991633   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992371   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992646   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:55.992791   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992861   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.992981   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993079   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:55.993167   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993268   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993351   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:55.993443   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993555   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993620   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993727   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.993868   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.993971   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994064   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:55.994165   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:55.994248   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:55.994329   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:55.994412   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994494   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994578   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994659   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994742   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994823   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994913   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.994995   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995077   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995159   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:55.995240   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:55.995328   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995410   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:55.995485   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995589   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995710   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:55.995836   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.995933   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996022   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996151   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996253   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996287   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996398   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996522   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996629   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996759   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996845   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.996929   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997061   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997190   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997237   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997265   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997373   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997466   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997549   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997635   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997719   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997805   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:55.997805   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997842   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997842   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.997873   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.997913   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.997941   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998013   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998097   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998180   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998218   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998218   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.998249   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.998813   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999018   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999018   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:55.999054   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:56.015895   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:50:56.015895   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:50:56.049247   14012 command_runner.go:130] > .:53
	I0624 05:50:56.049450   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:56.049450   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:56.049450   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:50:56.049516   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:50:56.049587   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:50:56.049635   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:50:56.049635   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:50:56.049682   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:50:56.049727   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:50:56.049798   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:50:56.049798   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:50:56.049849   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:50:56.049849   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:50:56.049885   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:50:56.049885   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:50:56.049935   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:50:56.049997   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:50:56.050025   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:50:56.050058   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0624 05:50:56.054740   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:50:56.054848   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:50:56.083943   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:50:56.083943   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:50:56.084947   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:56.085100   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:50:56.085165   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:56.085212   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:50:56.085256   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:50:56.085315   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:50:56.085315   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:50:56.085377   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085377   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:50:56.085415   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085415   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085453   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:50:56.085453   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:50:56.085489   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085489   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085545   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:50:56.085545   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085588   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:50:56.085588   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:50:56.085588   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:50:56.085649   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:50:56.085649   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:50:56.085649   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:50:56.085715   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:50:56.085715   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085756   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085756   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:50:56.085793   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085793   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085793   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:50:56.085793   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:50:56.085879   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085879   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085879   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.085938   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086031   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:50:56.086079   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086079   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086079   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:50:56.086159   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086159   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086214   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:50:56.086214   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:50:56.086258   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:50:56.086258   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:50:56.086305   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:50:56.086305   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:50:56.086368   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086368   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:50:56.086434   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:50:56.086434   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:56.086434   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:50:56.086515   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:50:56.086621   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:50:56.086669   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:50:56.086669   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:50:56.086704   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:50:56.086757   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:50:56.086808   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:50:56.086849   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:50:56.086884   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:50:56.086923   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:50:56.086923   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:50:56.086964   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:50:56.087002   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:50:56.087002   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:50:56.087067   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:50:56.087104   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:50:56.087104   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:50:56.087144   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:50:56.087144   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:50:56.096168   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:56.096168   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:56.125228   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:56.126053   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:56.126122   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:56.126122   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:56.126238   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:56.126377   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:56.126506   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:56.126506   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:56.128317   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:56.128414   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:56.161387   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161387   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.161532   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161623   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.161738   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161824   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.161918   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:56.162020   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:56.162133   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.162159   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:56.162209   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:56.162290   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:56.162373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162421   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162482   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162545   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162641   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.162728   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163274   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163274   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.163324   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163364   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163418   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163480   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163480   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163525   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163525   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163581   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163581   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163624   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163624   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163676   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:56.163676   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163717   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163717   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:56.163760   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:56.163820   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:56.163820   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:56.163917   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:56.163975   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.163975   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:56.164041   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:56.164111   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:56.164210   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:56.164210   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:56.164270   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:56.164270   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:56.164316   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:56.164385   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:56.164455   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:56.164526   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:56.164591   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:56.164676   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:56.164676   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:56.164720   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164755   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.164797   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164843   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.164843   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164891   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:56.164891   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164937   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164977   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.164977   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.165041   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:56.165041   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:56.165109   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:56.165109   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165155   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165737   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165737   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165792   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165889   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:56.165927   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:56.165961   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:56.166553   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:56.166553   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:56.166641   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:56.166775   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.166821   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166879   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166917   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.166981   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.166981   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167032   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167057   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167595   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.167715   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168275   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:50:56.168364   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168449   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168522   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168522   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168603   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168672   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168726   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168726   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168773   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168773   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:56.168850   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:50:56.168850   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.168902   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.168902   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.168971   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.169008   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:56.169033   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:56.169033   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.169122   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:56.198252   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:56.198252   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:56.269890   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:56.270158   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:56.270158   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:56.270158   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:56.270281   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:56.270326   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:56.270326   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:56.270326   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:56.270415   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:56.270415   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:56.270500   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:56.270500   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:56.270540   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:56.270540   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:56.270540   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:56.270629   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:56.270629   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:56.273020   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:56.273020   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:56.492041   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:56.492107   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:56.492107   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.492107   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.492107   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.492174   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:56.492215   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:56.492282   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.492282   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.492282   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.492330   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:56.492330   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:56.492330   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.492389   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.492389   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:56.492389   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.492435   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:55 +0000
	I0624 05:50:56.492435   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.492435   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:56.492485   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:56.492485   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:56.492539   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:56.492539   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:56.492588   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:56.492588   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.492635   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:56.492635   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:56.492635   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.492635   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.492684   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.492684   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.492684   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.492684   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.492684   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.492730   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.492730   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.492771   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.492771   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.492771   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.492771   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.492771   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:56.492771   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:56.492817   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:56.492909   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.492909   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.492970   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.493015   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:56.493015   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:56.493015   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:56.493015   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.493098   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.493098   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:56.493098   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:56.493168   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493230   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:56.493310   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.493310   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.493310   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:56.493310   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:56.493354   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:56.493354   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:56.493408   14012 command_runner.go:130] > Events:
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:56.493408   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0624 05:50:56.493408   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:56.493471   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:56.493527   14012 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:56.493575   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:56.493642   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:56.493642   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.493642   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.493642   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:56.493642   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:56.493642   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:56.493642   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.493642   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.493642   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:56.493642   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:56.493642   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.493642   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:56.493642   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:56.493642   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.493642   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.493642   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.493642   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.493642   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.493642   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.493642   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.494182   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.494182   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.494182   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:56.494182   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:56.494247   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:56.494247   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.494247   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.494365   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.494406   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.494406   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:56.494406   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:56.494479   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:56.494479   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.494525   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.494525   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:56.494565   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:56.494565   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:56.494609   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.494609   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.494609   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:56.494609   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:56.494663   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:56.494663   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:56.494663   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:56.494709   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:56.494709   14012 command_runner.go:130] > Events:
	I0624 05:50:56.494709   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:56.494709   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:56.494764   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.494809   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:56.494809   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.494857   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:56.494857   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:56.494901   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:56.494901   14012 command_runner.go:130] >   Normal  NodeNotReady             21s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:56.494952   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:56.494952   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:56.494952   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:56.494952   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:56.495002   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:56.495055   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:56.495097   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:56.495097   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:56.495097   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:56.495097   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:56.495169   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:56.495169   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:56.495169   14012 command_runner.go:130] > Lease:
	I0624 05:50:56.495169   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:56.495169   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:56.495169   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:56.495242   14012 command_runner.go:130] > Conditions:
	I0624 05:50:56.495242   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:56.495242   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:56.495304   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495304   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:56.495492   14012 command_runner.go:130] > Addresses:
	I0624 05:50:56.495492   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:56.495492   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:56.495492   14012 command_runner.go:130] > Capacity:
	I0624 05:50:56.495530   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.495530   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.495530   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.495530   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.495530   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.495530   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:56.495593   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:56.495593   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:56.495593   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:56.495593   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:56.495593   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:56.495593   14012 command_runner.go:130] > System Info:
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:56.495652   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:56.495652   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:56.495652   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:56.495714   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:56.495714   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:56.495774   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:56.495774   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:56.495845   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:56.495845   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:56.495905   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:56.495905   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:56.495905   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:56.495905   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:56.495905   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:56.495905   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:56.495905   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:56.495905   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:56.495985   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:56.495985   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:56.495985   14012 command_runner.go:130] > Events:
	I0624 05:50:56.495985   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:56.495985   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  Starting                 5m40s                  kube-proxy       
	I0624 05:50:56.496046   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.496110   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:56.496171   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:56.496245   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:56.496303   14012 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:56.496303   14012 command_runner.go:130] >   Normal  NodeReady                5m36s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:56.496344   14012 command_runner.go:130] >   Normal  NodeNotReady             4m                     node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:56.496344   14012 command_runner.go:130] >   Normal  RegisteredNode           61s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:59.010391   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:50:59.019178   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
	I0624 05:50:59.019178   14012 round_trippers.go:463] GET https://172.31.217.139:8443/version
	I0624 05:50:59.019178   14012 round_trippers.go:469] Request Headers:
	I0624 05:50:59.019178   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:50:59.019178   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:50:59.021643   14012 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0624 05:50:59.021643   14012 round_trippers.go:577] Response Headers:
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:50:59.021643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:50:59.021643   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Content-Length: 263
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:50:59 GMT
	I0624 05:50:59.021643   14012 round_trippers.go:580]     Audit-Id: a34bdbe4-d317-4e0e-988d-97dd2edb80de
	I0624 05:50:59.021643   14012 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0624 05:50:59.021643   14012 api_server.go:141] control plane version: v1.30.2
	I0624 05:50:59.021643   14012 api_server.go:131] duration metric: took 3.8257243s to wait for apiserver health ...
	I0624 05:50:59.021643   14012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0624 05:50:59.032181   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0624 05:50:59.062863   14012 command_runner.go:130] > d02d42ecc648
	I0624 05:50:59.062935   14012 logs.go:276] 1 containers: [d02d42ecc648]
	I0624 05:50:59.073166   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0624 05:50:59.098295   14012 command_runner.go:130] > 7154c31f4e65
	I0624 05:50:59.098295   14012 logs.go:276] 1 containers: [7154c31f4e65]
	I0624 05:50:59.112486   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0624 05:50:59.136316   14012 command_runner.go:130] > b74d3be4b134
	I0624 05:50:59.136316   14012 command_runner.go:130] > f46bdc12472e
	I0624 05:50:59.136316   14012 logs.go:276] 2 containers: [b74d3be4b134 f46bdc12472e]
	I0624 05:50:59.145312   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0624 05:50:59.170374   14012 command_runner.go:130] > 92813c7375dd
	I0624 05:50:59.170374   14012 command_runner.go:130] > d7d8d18e1b11
	I0624 05:50:59.170374   14012 logs.go:276] 2 containers: [92813c7375dd d7d8d18e1b11]
	I0624 05:50:59.179748   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0624 05:50:59.211023   14012 command_runner.go:130] > d7311e3316b7
	I0624 05:50:59.211023   14012 command_runner.go:130] > b0dd966ee710
	I0624 05:50:59.211106   14012 logs.go:276] 2 containers: [d7311e3316b7 b0dd966ee710]
	I0624 05:50:59.220417   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0624 05:50:59.247808   14012 command_runner.go:130] > 39d593f24d2b
	I0624 05:50:59.247847   14012 command_runner.go:130] > 7174bdea66e2
	I0624 05:50:59.247847   14012 logs.go:276] 2 containers: [39d593f24d2b 7174bdea66e2]
	I0624 05:50:59.256586   14012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0624 05:50:59.280125   14012 command_runner.go:130] > 404cdbe8e049
	I0624 05:50:59.280125   14012 command_runner.go:130] > f74eb1beb274
	I0624 05:50:59.280125   14012 logs.go:276] 2 containers: [404cdbe8e049 f74eb1beb274]
	I0624 05:50:59.280125   14012 logs.go:123] Gathering logs for kube-controller-manager [7174bdea66e2] ...
	I0624 05:50:59.280125   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7174bdea66e2"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.206441       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.628587       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.630826       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.632648       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:22.633969       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.693781       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.693896       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715421       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715908       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.715925       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726253       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726372       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726594       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.726774       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.745986       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746595       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.746147       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.768949       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.769101       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.769864       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.770242       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.784592       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.785204       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.785305       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:50:59.308909   14012 command_runner.go:130] ! I0624 12:26:26.794616       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:50:59.309443   14012 command_runner.go:130] ! I0624 12:26:26.800916       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:50:59.309443   14012 command_runner.go:130] ! I0624 12:26:26.801276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.801477       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.814846       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:50:59.309488   14012 command_runner.go:130] ! I0624 12:26:26.815072       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.815297       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849021       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849588       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.849897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:50:59.309574   14012 command_runner.go:130] ! I0624 12:26:26.874141       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:26.874489       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:26.874607       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:27.013046       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:50:59.309660   14012 command_runner.go:130] ! I0624 12:26:27.013473       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:50:59.309735   14012 command_runner.go:130] ! I0624 12:26:27.013734       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:50:59.309777   14012 command_runner.go:130] ! I0624 12:26:27.014094       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:50:59.309777   14012 command_runner.go:130] ! I0624 12:26:27.014288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.014475       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.014695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:50:59.309841   14012 command_runner.go:130] ! I0624 12:26:27.015128       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:50:59.309922   14012 command_runner.go:130] ! I0624 12:26:27.015300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:50:59.309922   14012 command_runner.go:130] ! I0624 12:26:27.015522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:50:59.309983   14012 command_runner.go:130] ! I0624 12:26:27.015862       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:50:59.309983   14012 command_runner.go:130] ! W0624 12:26:27.016135       1 shared_informer.go:597] resyncPeriod 13h45m44.075159301s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:59.309983   14012 command_runner.go:130] ! I0624 12:26:27.016395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.016607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.016880       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017078       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017477       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.017909       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! W0624 12:26:27.018148       1 shared_informer.go:597] resyncPeriod 12h19m38.569038613s is smaller than resyncCheckPeriod 23h36m51.778396022s and the informer has already started. Changing it to 23h36m51.778396022s
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.018399       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.018912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:50:59.310045   14012 command_runner.go:130] ! I0624 12:26:27.019309       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.019529       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.021358       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.021200       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260578       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260613       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.260675       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.447952       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448019       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448090       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.448103       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:50:59.310512   14012 command_runner.go:130] ! E0624 12:26:27.603453       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.604006       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752362       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752462       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752517       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.752754       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.915839       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.916646       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:27.916970       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.053450       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.053489       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.054837       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.055235       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.203694       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.203976       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204245       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204412       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.204552       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372076       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372623       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.372960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:50:59.310512   14012 command_runner.go:130] ! E0624 12:26:28.402024       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.402050       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.556374       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.556509       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.558503       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705440       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705561       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.705581       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855404       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855676       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:28.855735       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.003880       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.004493       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.004735       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.152413       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.152574       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.302394       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.302468       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.303031       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:50:59.310512   14012 command_runner.go:130] ! I0624 12:26:29.453371       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.456862       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.456879       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.648525       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.648617       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705166       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705258       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705293       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.705326       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.853878       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.854364       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:29.854558       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.005972       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.006011       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.006417       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154210       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154401       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.154436       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198297       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198423       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198536       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.198556       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.248989       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249019       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249035       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249606       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249649       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.249664       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250126       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250170       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250193       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.250896       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251055       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:30.251640       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.311848       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.311975       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.312143       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.312179       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324219       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324706       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.324869       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345373       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345770       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.345838       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371279       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371633       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.371653       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.373875       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393197       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393715       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.393840       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.413450       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.413710       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:50:59.311491   14012 command_runner.go:130] ! I0624 12:26:40.415319       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.457885       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460359       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460497       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.460990       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.462766       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.472473       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.474859       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.486971       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.494371       1 shared_informer.go:320] Caches are synced for job
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.498664       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.501248       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.502263       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.503419       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.505659       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.505993       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.506519       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.506983       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512647       1 shared_informer.go:320] Caches are synced for node
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512777       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512914       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.512982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.513010       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.518736       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.518858       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.526899       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.526911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.536214       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600" podCIDRs=["10.244.0.0/24"]
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.547914       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.548259       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551681       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551943       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551950       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.551956       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.557672       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.557845       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.558157       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.558166       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.561611       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.573979       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.604966       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605143       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.605176       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.615875       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.617981       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.662594       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.723163       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:40.749099       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.130412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="529.154397ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.173935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.174691       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.192281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.116161ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.197286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.202µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.213971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.254421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.801µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.961982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.897922ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.981574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.206589ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:41.988779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.001µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:51.872165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.901µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:51.924520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.2µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:54.091110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.523302ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:54.101593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.399µs"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:26:55.608512       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 05:50:59.312481   14012 command_runner.go:130] ! I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 05:50:59.313480   14012 command_runner.go:130] ! I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:50:59.330479   14012 logs.go:123] Gathering logs for kindnet [f74eb1beb274] ...
	I0624 05:50:59.331476   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f74eb1beb274"
	I0624 05:50:59.368509   14012 command_runner.go:130] ! I0624 12:36:10.612193       1 main.go:227] handling current node
	I0624 05:50:59.368509   14012 command_runner.go:130] ! I0624 12:36:10.612208       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612214       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612896       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:10.612960       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622237       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622405       1 main.go:227] handling current node
	I0624 05:50:59.368974   14012 command_runner.go:130] ! I0624 12:36:20.622423       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369160   14012 command_runner.go:130] ! I0624 12:36:20.622432       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:20.623046       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:20.623151       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369235   14012 command_runner.go:130] ! I0624 12:36:30.630467       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630526       1 main.go:227] handling current node
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630540       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.630545       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369301   14012 command_runner.go:130] ! I0624 12:36:30.631179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:30.631316       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640240       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640347       1 main.go:227] handling current node
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640364       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369368   14012 command_runner.go:130] ! I0624 12:36:40.640371       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369446   14012 command_runner.go:130] ! I0624 12:36:40.640987       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369446   14012 command_runner.go:130] ! I0624 12:36:40.641099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369489   14012 command_runner.go:130] ! I0624 12:36:50.648764       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369489   14012 command_runner.go:130] ! I0624 12:36:50.648918       1 main.go:227] handling current node
	I0624 05:50:59.369530   14012 command_runner.go:130] ! I0624 12:36:50.648934       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369530   14012 command_runner.go:130] ! I0624 12:36:50.648942       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369557   14012 command_runner.go:130] ! I0624 12:36:50.649560       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:36:50.649639       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:37:00.665115       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369581   14012 command_runner.go:130] ! I0624 12:37:00.665211       1 main.go:227] handling current node
	I0624 05:50:59.369641   14012 command_runner.go:130] ! I0624 12:37:00.665243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369641   14012 command_runner.go:130] ! I0624 12:37:00.665250       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:00.665973       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:00.666297       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673125       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673214       1 main.go:227] handling current node
	I0624 05:50:59.369719   14012 command_runner.go:130] ! I0624 12:37:10.673231       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.673239       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.673863       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:10.674072       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688502       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688708       1 main.go:227] handling current node
	I0624 05:50:59.369785   14012 command_runner.go:130] ! I0624 12:37:20.688783       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369882   14012 command_runner.go:130] ! I0624 12:37:20.688887       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:20.689097       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:20.689185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695333       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695559       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695618       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695833       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:30.695991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712366       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712477       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712492       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.712499       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.713191       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:40.713340       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720063       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720239       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720253       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720260       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720369       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:37:50.720377       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.737636       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.737947       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738025       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738109       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738358       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:00.738456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753061       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753387       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753595       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753768       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.753992       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:10.754030       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765543       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765574       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765596       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.765955       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:20.766045       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779589       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779692       1 main.go:227] handling current node
	I0624 05:50:59.369908   14012 command_runner.go:130] ! I0624 12:38:30.779707       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370444   14012 command_runner.go:130] ! I0624 12:38:30.779714       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370444   14012 command_runner.go:130] ! I0624 12:38:30.780050       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:30.780160       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:40.789320       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370487   14012 command_runner.go:130] ! I0624 12:38:40.789490       1 main.go:227] handling current node
	I0624 05:50:59.370539   14012 command_runner.go:130] ! I0624 12:38:40.789524       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370539   14012 command_runner.go:130] ! I0624 12:38:40.789546       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370608   14012 command_runner.go:130] ! I0624 12:38:40.789682       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370608   14012 command_runner.go:130] ! I0624 12:38:40.789744       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801399       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801467       1 main.go:227] handling current node
	I0624 05:50:59.370664   14012 command_runner.go:130] ! I0624 12:38:50.801481       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.801487       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.802193       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:38:50.802321       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.814735       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.815272       1 main.go:227] handling current node
	I0624 05:50:59.370720   14012 command_runner.go:130] ! I0624 12:39:00.815427       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370796   14012 command_runner.go:130] ! I0624 12:39:00.815439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370824   14012 command_runner.go:130] ! I0624 12:39:00.815986       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:00.816109       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.831199       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.832526       1 main.go:227] handling current node
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.832856       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.833188       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.370842   14012 command_runner.go:130] ! I0624 12:39:10.838555       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:10.838865       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847914       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847939       1 main.go:227] handling current node
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847951       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.370928   14012 command_runner.go:130] ! I0624 12:39:20.847957       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371021   14012 command_runner.go:130] ! I0624 12:39:20.848392       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371021   14012 command_runner.go:130] ! I0624 12:39:20.848423       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860714       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860767       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860779       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.860785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.861283       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:30.861379       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868293       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868398       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868413       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868420       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868543       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:40.868722       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880221       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880373       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880392       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880402       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880912       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:39:50.880991       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897121       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897564       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897651       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.897749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.898213       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:00.898295       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913136       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913233       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913264       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913271       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.913869       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:10.914021       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922013       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922147       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922162       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922169       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922635       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:20.922743       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.937756       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.937901       1 main.go:227] handling current node
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938461       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938594       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371052   14012 command_runner.go:130] ! I0624 12:40:30.938929       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:30.939016       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:40.946205       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371589   14012 command_runner.go:130] ! I0624 12:40:40.946231       1 main.go:227] handling current node
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946243       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946249       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946713       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:40.946929       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371640   14012 command_runner.go:130] ! I0624 12:40:50.962243       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371707   14012 command_runner.go:130] ! I0624 12:40:50.962553       1 main.go:227] handling current node
	I0624 05:50:59.371707   14012 command_runner.go:130] ! I0624 12:40:50.963039       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371754   14012 command_runner.go:130] ! I0624 12:40:50.963516       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371754   14012 command_runner.go:130] ! I0624 12:40:50.963690       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:40:50.963770       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971339       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971449       1 main.go:227] handling current node
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971465       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371797   14012 command_runner.go:130] ! I0624 12:41:00.971475       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:00.971593       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:00.971692       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371863   14012 command_runner.go:130] ! I0624 12:41:10.980422       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371927   14012 command_runner.go:130] ! I0624 12:41:10.980533       1 main.go:227] handling current node
	I0624 05:50:59.371927   14012 command_runner.go:130] ! I0624 12:41:10.980547       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371953   14012 command_runner.go:130] ! I0624 12:41:10.980554       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:10.981184       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:10.981291       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994548       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994671       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994702       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.994749       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.995257       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:20.995359       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002456       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002501       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002513       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002518       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002691       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:31.002704       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013190       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013298       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013315       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013323       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:41.013826       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027455       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027677       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027693       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.027702       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.028237       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:41:51.028303       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043352       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043467       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043487       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043497       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.043979       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:01.044071       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061262       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061292       1 main.go:227] handling current node
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061304       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.371979   14012 command_runner.go:130] ! I0624 12:42:11.061313       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:11.061445       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:11.061454       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079500       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079684       1 main.go:227] handling current node
	I0624 05:50:59.372505   14012 command_runner.go:130] ! I0624 12:42:21.079722       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.079747       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.080033       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:21.080122       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:31.086695       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372568   14012 command_runner.go:130] ! I0624 12:42:31.086877       1 main.go:227] handling current node
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.086897       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.086906       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.087071       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:31.087086       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101071       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101114       1 main.go:227] handling current node
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101129       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372625   14012 command_runner.go:130] ! I0624 12:42:41.101136       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:41.101426       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:41.101443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109343       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109446       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109482       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109491       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109637       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:42:51.109671       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125261       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125579       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125601       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125613       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.125881       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:01.126025       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137392       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137565       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137599       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137624       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137836       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:11.137880       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.151981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152027       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152041       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152048       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152174       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:21.152187       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158435       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158545       1 main.go:227] handling current node
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158561       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158568       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.158761       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:31.159003       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.372715   14012 command_runner.go:130] ! I0624 12:43:41.170607       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373245   14012 command_runner.go:130] ! I0624 12:43:41.170761       1 main.go:227] handling current node
	I0624 05:50:59.373245   14012 command_runner.go:130] ! I0624 12:43:41.170777       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.170785       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.170958       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:41.171046       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:51.177781       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373306   14012 command_runner.go:130] ! I0624 12:43:51.178299       1 main.go:227] handling current node
	I0624 05:50:59.373401   14012 command_runner.go:130] ! I0624 12:43:51.178313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178461       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:43:51.178490       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187449       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187627       1 main.go:227] handling current node
	I0624 05:50:59.373421   14012 command_runner.go:130] ! I0624 12:44:01.187661       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.187685       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.188037       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:01.188176       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373509   14012 command_runner.go:130] ! I0624 12:44:11.202762       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373573   14012 command_runner.go:130] ! I0624 12:44:11.202916       1 main.go:227] handling current node
	I0624 05:50:59.373573   14012 command_runner.go:130] ! I0624 12:44:11.202931       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373598   14012 command_runner.go:130] ! I0624 12:44:11.202938       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:11.203384       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:11.203472       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210306       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210393       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210432       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.210439       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.211179       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:21.211208       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.224996       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225111       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225126       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225134       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225411       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:31.225443       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.231748       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232298       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232320       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232330       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232589       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:41.232714       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.247960       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248042       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248057       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248064       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248602       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:44:51.248687       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254599       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254726       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254880       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.254967       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.255102       1 main.go:223] Handling node with IPs: map[172.31.215.226:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:01.255130       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.2.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266678       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266897       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266913       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:11.266968       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.281856       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.281988       1 main.go:227] handling current node
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282122       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282152       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282517       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.373627   14012 command_runner.go:130] ! I0624 12:45:21.282918       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:21.283334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:31.290754       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374174   14012 command_runner.go:130] ! I0624 12:45:31.290937       1 main.go:227] handling current node
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.290955       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.290963       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.291391       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:31.291497       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:41.302532       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374239   14012 command_runner.go:130] ! I0624 12:45:41.302559       1 main.go:227] handling current node
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.302571       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.302577       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.303116       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374354   14012 command_runner.go:130] ! I0624 12:45:41.303150       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314492       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314609       1 main.go:227] handling current node
	I0624 05:50:59.374442   14012 command_runner.go:130] ! I0624 12:45:51.314625       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.314634       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.315042       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374479   14012 command_runner.go:130] ! I0624 12:45:51.315144       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374519   14012 command_runner.go:130] ! I0624 12:46:01.330981       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374519   14012 command_runner.go:130] ! I0624 12:46:01.331091       1 main.go:227] handling current node
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331108       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331118       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374562   14012 command_runner.go:130] ! I0624 12:46:01.331615       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:01.331632       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347377       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347492       1 main.go:227] handling current node
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347507       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374633   14012 command_runner.go:130] ! I0624 12:46:11.347515       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:11.347627       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:11.347658       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:21.353876       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374708   14012 command_runner.go:130] ! I0624 12:46:21.354017       1 main.go:227] handling current node
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354037       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354047       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374777   14012 command_runner.go:130] ! I0624 12:46:21.354409       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374818   14012 command_runner.go:130] ! I0624 12:46:21.354507       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374818   14012 command_runner.go:130] ! I0624 12:46:31.360620       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374854   14012 command_runner.go:130] ! I0624 12:46:31.360713       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.360729       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.360736       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.361471       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:50:59.374885   14012 command_runner.go:130] ! I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:50:59.393432   14012 logs.go:123] Gathering logs for coredns [b74d3be4b134] ...
	I0624 05:50:59.393432   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74d3be4b134"
	I0624 05:50:59.425793   14012 command_runner.go:130] > .:53
	I0624 05:50:59.425793   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:50:59.425793   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:50:59.425793   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:50:59.425793   14012 command_runner.go:130] > [INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	I0624 05:50:59.425793   14012 logs.go:123] Gathering logs for kube-proxy [b0dd966ee710] ...
	I0624 05:50:59.425793   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0dd966ee710"
	I0624 05:50:59.458148   14012 command_runner.go:130] ! I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:59.458797   14012 command_runner.go:130] ! I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:59.458871   14012 command_runner.go:130] ! I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:59.460485   14012 logs.go:123] Gathering logs for etcd [7154c31f4e65] ...
	I0624 05:50:59.460485   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7154c31f4e65"
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.800127Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801686Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.31.217.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.31.217.139:2380","--initial-cluster=multinode-876600=https://172.31.217.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.31.217.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.31.217.139:2380","--name=multinode-876600","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.801904Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.802043Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802055Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.31.217.139:2380"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.802173Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.813683Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.817166Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-876600","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.858508Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"38.762891ms"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.889653Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908065Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","commit-index":2025}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=()"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.90855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became follower at term 2"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.908564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5aae37eb5b537b7 [peers: [], term: 2, commit: 2025, applied: 0, lastindex: 2025, lastterm: 2]"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"warn","ts":"2024-06-24T12:49:39.923675Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.929194Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1365}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.935469Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1750}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.950086Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.96537Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5aae37eb5b537b7","timeout":"7s"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966135Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5aae37eb5b537b7"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.966969Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"e5aae37eb5b537b7","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 switched to configuration voters=(16549289914080245687)"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.968886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","added-peer-id":"e5aae37eb5b537b7","added-peer-peer-urls":["https://172.31.211.219:2380"]}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	I0624 05:50:59.490474   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0624 05:50:59.491453   14012 command_runner.go:130] ! {"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	I0624 05:50:59.497453   14012 logs.go:123] Gathering logs for kube-proxy [d7311e3316b7] ...
	I0624 05:50:59.497453   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7311e3316b7"
	I0624 05:50:59.524454   14012 command_runner.go:130] ! I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 05:50:59.525104   14012 command_runner.go:130] ! I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 05:50:59.525158   14012 command_runner.go:130] ! I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	I0624 05:50:59.527524   14012 logs.go:123] Gathering logs for container status ...
	I0624 05:50:59.527524   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0624 05:50:59.592602   14012 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0624 05:50:59.592602   14012 command_runner.go:130] > 30f4b1b02a0ba       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	I0624 05:50:59.592602   14012 command_runner.go:130] > b74d3be4b134f       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:59.592602   14012 command_runner.go:130] > 804c0aa053890       6e38f40d628db                                                                                         29 seconds ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	I0624 05:50:59.592602   14012 command_runner.go:130] > 404cdbe8e049d       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	I0624 05:50:59.592602   14012 command_runner.go:130] > 30fc6635cecf9       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	I0624 05:50:59.592602   14012 command_runner.go:130] > d7311e3316b77       53c535741fb44                                                                                         About a minute ago   Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	I0624 05:50:59.592602   14012 command_runner.go:130] > 7154c31f4e659       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > d02d42ecc648a       56ce0fd9fb532                                                                                         About a minute ago   Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > 92813c7375dd7       7820c83aa1394                                                                                         About a minute ago   Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > 39d593f24d2b3       e874818b3caac                                                                                         About a minute ago   Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	I0624 05:50:59.592602   14012 command_runner.go:130] > f46bdc12472e4       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	I0624 05:50:59.592602   14012 command_runner.go:130] > f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	I0624 05:50:59.592602   14012 command_runner.go:130] > b0dd966ee710f       53c535741fb44                                                                                         24 minutes ago       Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	I0624 05:50:59.592602   14012 command_runner.go:130] > 7174bdea66e24       e874818b3caac                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	I0624 05:50:59.592602   14012 command_runner.go:130] > d7d8d18e1b115       7820c83aa1394                                                                                         24 minutes ago       Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	I0624 05:50:59.595598   14012 logs.go:123] Gathering logs for kubelet ...
	I0624 05:50:59.595598   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811365    1380 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.811680    1380 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: I0624 12:49:33.812614    1380 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 kubelet[1380]: E0624 12:49:33.814151    1380 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:59.626768   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:33 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538431    1430 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.538816    1430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: I0624 12:49:34.539226    1430 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 kubelet[1430]: E0624 12:49:34.539327    1430 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:34 multinode-876600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:35 multinode-876600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709357    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.2"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.709893    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.710380    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.713689    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.727908    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.749852    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.750150    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754322    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754383    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-876600","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754779    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754793    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.754845    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760643    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760689    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.760717    1517 kubelet.go:312] "Adding apiserver pod source"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.761552    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.765675    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.769504    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.770333    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.771499    1517 server.go:1264] "Started kubelet"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.773146    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.773260    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.776757    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.777028    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.777249    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.779043    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.780454    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.785286    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0624 05:50:59.627764   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.787808    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.787397    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.31.217.139:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-876600.17dbf1a5f01055d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-876600,UID:multinode-876600,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-876600,},FirstTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,LastTimestamp:2024-06-24 12:49:37.771476434 +0000 UTC m=+0.158435193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-876600,}"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.795745    1517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-876600\" not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795790    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.795859    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.811876    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="200ms"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.812137    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.812240    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.816923    1517 factory.go:221] Registration of the systemd container factory successfully
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817116    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.817180    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.849272    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858618    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858649    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.858679    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859232    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859338    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.859374    1517 policy_none.go:49] "None policy: Start"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.874552    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883737    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.883887    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.884061    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.884450    1517 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: W0624 12:49:37.891255    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.891809    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.897656    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.899333    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.908621    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.909440    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.910768    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.911242    1517 state_mem.go:75] "Updated machine memory state"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.917629    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.918054    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: E0624 12:49:37.922689    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-876600\" not found"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.926295    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0624 05:50:59.628775   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.984694    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3038ef4054f2a74be3ac6770afa89a1a" podNamespace="kube-system" podName="kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.987298    1517 topology_manager.go:215] "Topology Admit Handler" podUID="a20f51e7dce32bda1f77fbfb30315284" podNamespace="kube-system" podName="kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.988967    1517 topology_manager.go:215] "Topology Admit Handler" podUID="50c7b7ba99620272d80c509bd4d93e67" podNamespace="kube-system" podName="kube-scheduler-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.990334    1517 topology_manager.go:215] "Topology Admit Handler" podUID="3fd3eb9408db2ef91e6f7d911ed85123" podNamespace="kube-system" podName="etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991281    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf1b076e912f20eaae1749c347bcc5e83b8124ba897ecb37ef8371b1db967ce"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991471    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d072caca0861002474304db2229c6b3e30666c2f41c71c16a495df204fe36f2f"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991572    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 kubelet[1517]: I0624 12:49:37.991586    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2af473df8adb23fc56dd617315ded0d05a5653d49003c8ca129ab05e908e52"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.001270    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449d7721b5b2bbf32870edad44c4c26f32f4524da356254981d19bb0058ca10"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.013521    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="400ms"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.018705    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f89e0f2608fef982bbf644221f8bcf194e532ace888fb0f11c4e6a336a864f7"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.032476    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6184b2eb79fd80be4d9dfbf5ed7eba56faa80bf8faa268522d65c3465e07eb49"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055386    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-ca-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055439    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-flexvolume-dir\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055470    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-k8s-certs\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055492    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-data\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055530    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-k8s-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055549    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055586    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055612    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c7b7ba99620272d80c509bd4d93e67-kubeconfig\") pod \"kube-scheduler-multinode-876600\" (UID: \"50c7b7ba99620272d80c509bd4d93e67\") " pod="kube-system/kube-scheduler-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055631    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3fd3eb9408db2ef91e6f7d911ed85123-etcd-certs\") pod \"etcd-multinode-876600\" (UID: \"3fd3eb9408db2ef91e6f7d911ed85123\") " pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055702    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3038ef4054f2a74be3ac6770afa89a1a-ca-certs\") pod \"kube-apiserver-multinode-876600\" (UID: \"3038ef4054f2a74be3ac6770afa89a1a\") " pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.055774    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20f51e7dce32bda1f77fbfb30315284-kubeconfig\") pod \"kube-controller-manager-multinode-876600\" (UID: \"a20f51e7dce32bda1f77fbfb30315284\") " pod="kube-system/kube-controller-manager-multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.058834    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1c3ec125c93c5fca057938d122ca0534a2fe148d252be371f8c4606584f5f7"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.077789    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.101443    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.629759   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.102907    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.415249    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="800ms"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: I0624 12:49:38.505446    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.506697    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.624819    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.625024    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: W0624 12:49:38.744275    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 kubelet[1517]: E0624 12:49:38.744349    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.124419    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.141338    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.155177    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.155254    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: W0624 12:49:39.187826    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.187925    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-876600&limit=500&resourceVersion=0": dial tcp 172.31.217.139:8443: connect: connection refused
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.216921    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-876600?timeout=10s\": dial tcp 172.31.217.139:8443: connect: connection refused" interval="1.6s"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: I0624 12:49:39.308797    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 kubelet[1517]: E0624 12:49:39.310065    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.31.217.139:8443: connect: connection refused" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:40 multinode-876600 kubelet[1517]: I0624 12:49:40.911597    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.298854    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.299060    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-876600"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.301304    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.302138    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.303325    1517 setters.go:580] "Node became not ready" node="multinode-876600" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-24T12:49:43Z","lastTransitionTime":"2024-06-24T12:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.779243    1517 apiserver.go:52] "Watching apiserver"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.787310    1517 topology_manager.go:215] "Topology Admit Handler" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sq7g6"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788207    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-876600" podUID="52a7f191-9dd7-4dcd-8e9e-d05deeac2349"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.788355    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788544    1517 topology_manager.go:215] "Topology Admit Handler" podUID="0529046f-d42a-4351-9b49-2572866afd47" podNamespace="kube-system" podName="kindnet-x7zb4"
	I0624 05:50:59.630772   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.788784    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789177    1517 topology_manager.go:215] "Topology Admit Handler" podUID="038c238e-3e2b-4d31-a68c-64bf29863d8f" podNamespace="kube-system" podName="kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789395    1517 topology_manager.go:215] "Topology Admit Handler" podUID="056be0f2-af5c-427e-961b-a9101f3186d8" podNamespace="kube-system" podName="storage-provisioner"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.789535    1517 topology_manager.go:215] "Topology Admit Handler" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793" podNamespace="default" podName="busybox-fc5497c4f-ddhfw"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.789835    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.796635    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825335    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-cni-cfg\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825393    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-xtables-lock\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825435    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/056be0f2-af5c-427e-961b-a9101f3186d8-tmp\") pod \"storage-provisioner\" (UID: \"056be0f2-af5c-427e-961b-a9101f3186d8\") " pod="kube-system/storage-provisioner"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825468    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0529046f-d42a-4351-9b49-2572866afd47-lib-modules\") pod \"kindnet-x7zb4\" (UID: \"0529046f-d42a-4351-9b49-2572866afd47\") " pod="kube-system/kindnet-x7zb4"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825507    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-xtables-lock\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.825548    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/038c238e-3e2b-4d31-a68c-64bf29863d8f-lib-modules\") pod \"kube-proxy-lcc9v\" (UID: \"038c238e-3e2b-4d31-a68c-64bf29863d8f\") " pod="kube-system/kube-proxy-lcc9v"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.825766    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.826086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.325968848 +0000 UTC m=+6.712927507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.838030    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-876600"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881247    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881299    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: E0624 12:49:43.881358    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:44.381339693 +0000 UTC m=+6.768298452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.886367    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-876600"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.900233    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e708d5cd73627b4d4daa56de34a8c4e" path="/var/lib/kubelet/pods/1e708d5cd73627b4d4daa56de34a8c4e/volumes"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.902231    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f659c666f2215840bd65758467c8d95f" path="/var/lib/kubelet/pods/f659c666f2215840bd65758467c8d95f/volumes"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 kubelet[1517]: I0624 12:49:43.955243    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-876600" podStartSLOduration=0.95522195 podStartE2EDuration="955.22195ms" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.954143273 +0000 UTC m=+6.341102032" watchObservedRunningTime="2024-06-24 12:49:43.95522195 +0000 UTC m=+6.342180609"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.009762    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-876600" podStartSLOduration=1.009741412 podStartE2EDuration="1.009741412s" podCreationTimestamp="2024-06-24 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-24 12:49:43.97249859 +0000 UTC m=+6.359457249" watchObservedRunningTime="2024-06-24 12:49:44.009741412 +0000 UTC m=+6.396700071"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: I0624 12:49:44.242033    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-876600" podUID="4906666c-eed2-4f7c-a011-5a9b589fdcdc"
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332476    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.332608    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.332586673 +0000 UTC m=+7.719545432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432880    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.432942    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 kubelet[1517]: E0624 12:49:44.433039    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:45.433019076 +0000 UTC m=+7.819977735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342759    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.631758   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.342957    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.342938282 +0000 UTC m=+9.729896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443838    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443898    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.443954    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:47.443936874 +0000 UTC m=+9.830895533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 kubelet[1517]: E0624 12:49:45.885774    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363414    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.363514    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.363496503 +0000 UTC m=+13.750455162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464741    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464805    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.464874    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:51.464854688 +0000 UTC m=+13.851813347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.885615    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.886796    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:47 multinode-876600 kubelet[1517]: E0624 12:49:47.921627    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887171    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:49 multinode-876600 kubelet[1517]: E0624 12:49:49.887539    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407511    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.407640    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.407621304 +0000 UTC m=+21.794579963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509093    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509198    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.509307    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:49:59.509286238 +0000 UTC m=+21.896244897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.885255    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:51 multinode-876600 kubelet[1517]: E0624 12:49:51.887050    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:52 multinode-876600 kubelet[1517]: E0624 12:49:52.922772    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.884799    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:53 multinode-876600 kubelet[1517]: E0624 12:49:53.885560    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.884746    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.632776   14012 command_runner.go:130] > Jun 24 12:49:55 multinode-876600 kubelet[1517]: E0624 12:49:55.885285    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.884831    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.891676    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:57 multinode-876600 kubelet[1517]: E0624 12:49:57.924490    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477230    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.477488    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.477469076 +0000 UTC m=+37.864427735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577409    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577519    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.577707    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:15.577682699 +0000 UTC m=+37.964641358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.885787    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:49:59 multinode-876600 kubelet[1517]: E0624 12:49:59.886423    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.884499    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:01 multinode-876600 kubelet[1517]: E0624 12:50:01.885179    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:02 multinode-876600 kubelet[1517]: E0624 12:50:02.926638    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.885239    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:03 multinode-876600 kubelet[1517]: E0624 12:50:03.886289    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.885743    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:05 multinode-876600 kubelet[1517]: E0624 12:50:05.886950    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.885504    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.886102    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:07 multinode-876600 kubelet[1517]: E0624 12:50:07.928432    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.885611    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:09 multinode-876600 kubelet[1517]: E0624 12:50:09.886730    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.885621    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:11 multinode-876600 kubelet[1517]: E0624 12:50:11.886895    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:12 multinode-876600 kubelet[1517]: E0624 12:50:12.930482    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.884826    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:13 multinode-876600 kubelet[1517]: E0624 12:50:13.886039    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.633782   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532258    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.532440    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume podName:921aea5c-15b7-4780-bd12-7d7eb82e97cc nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.532421815 +0000 UTC m=+69.919380474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/921aea5c-15b7-4780-bd12-7d7eb82e97cc-config-volume") pod "coredns-7db6d8ff4d-sq7g6" (UID: "921aea5c-15b7-4780-bd12-7d7eb82e97cc") : object "kube-system"/"coredns" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637739    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637886    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2j6r6 for pod default/busybox-fc5497c4f-ddhfw: object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.637965    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6 podName:bdf96c8c-7151-4ac5-9548-ee114ce02793 nodeName:}" failed. No retries permitted until 2024-06-24 12:50:47.637945031 +0000 UTC m=+70.024903790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6r6" (UniqueName: "kubernetes.io/projected/bdf96c8c-7151-4ac5-9548-ee114ce02793-kube-api-access-2j6r6") pod "busybox-fc5497c4f-ddhfw" (UID: "bdf96c8c-7151-4ac5-9548-ee114ce02793") : object "default"/"kube-root-ca.crt" not registered
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886049    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 kubelet[1517]: E0624 12:50:15.886518    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789398    1517 scope.go:117] "RemoveContainer" containerID="83a09faf1e2d5eebf4f2c598430b1f195ba6d8aa697fd8b4ee3946759d35d490"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: I0624 12:50:16.789770    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:16 multinode-876600 kubelet[1517]: E0624 12:50:16.789967    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(056be0f2-af5c-427e-961b-a9101f3186d8)\"" pod="kube-system/storage-provisioner" podUID="056be0f2-af5c-427e-961b-a9101f3186d8"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886193    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-sq7g6" podUID="921aea5c-15b7-4780-bd12-7d7eb82e97cc"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:17 multinode-876600 kubelet[1517]: E0624 12:50:17.886769    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-ddhfw" podUID="bdf96c8c-7151-4ac5-9548-ee114ce02793"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	I0624 05:50:59.634765   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	I0624 05:50:59.678758   14012 logs.go:123] Gathering logs for describe nodes ...
	I0624 05:50:59.678758   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0624 05:50:59.895215   14012 command_runner.go:130] > Name:               multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] > Roles:              control-plane
	I0624 05:50:59.895215   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0624 05:50:59.895215   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.895215   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.895215   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	I0624 05:50:59.895215   14012 command_runner.go:130] > Taints:             <none>
	I0624 05:50:59.895215   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.895215   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.895215   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600
	I0624 05:50:59.895215   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.895215   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:50:55 +0000
	I0624 05:50:59.895215   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.895215   14012 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0624 05:50:59.895215   14012 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0624 05:50:59.895215   14012 command_runner.go:130] >   MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0624 05:50:59.895744   14012 command_runner.go:130] >   DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0624 05:50:59.895744   14012 command_runner.go:130] >   PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0624 05:50:59.895744   14012 command_runner.go:130] >   Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	I0624 05:50:59.895744   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.895744   14012 command_runner.go:130] >   InternalIP:  172.31.217.139
	I0624 05:50:59.895872   14012 command_runner.go:130] >   Hostname:    multinode-876600
	I0624 05:50:59.895872   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.895872   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.895936   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.895936   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.895965   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.895965   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.895965   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.895965   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.896003   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.896003   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.896003   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.896003   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	I0624 05:50:59.896003   14012 command_runner.go:130] >   System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.896003   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.896003   14012 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0624 05:50:59.896003   14012 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0624 05:50:59.896003   14012 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.896003   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0624 05:50:59.896003   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Resource           Requests     Limits
	I0624 05:50:59.896003   14012 command_runner.go:130] >   --------           --------     ------
	I0624 05:50:59.896003   14012 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0624 05:50:59.896003   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0624 05:50:59.896003   14012 command_runner.go:130] > Events:
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:59.896003   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:59.896003   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-876600 status is now: NodeReady
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.896528   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	I0624 05:50:59.896682   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	I0624 05:50:59.896682   14012 command_runner.go:130] > Name:               multinode-876600-m02
	I0624 05:50:59.896682   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:59.896682   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m02
	I0624 05:50:59.896682   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	I0624 05:50:59.896778   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.896840   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.896840   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.896961   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.896961   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	I0624 05:50:59.896961   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:59.896961   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:59.896961   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.897029   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.897029   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m02
	I0624 05:50:59.897029   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.897029   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	I0624 05:50:59.897029   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.897096   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:59.897096   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:59.897169   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.897169   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.897266   14012 command_runner.go:130] >   InternalIP:  172.31.221.199
	I0624 05:50:59.897266   14012 command_runner.go:130] >   Hostname:    multinode-876600-m02
	I0624 05:50:59.897289   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.897289   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.897289   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.897318   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.897318   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.897318   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	I0624 05:50:59.897318   14012 command_runner.go:130] >   System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.897318   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.897318   14012 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0624 05:50:59.897318   14012 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0624 05:50:59.897318   14012 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.897318   14012 command_runner.go:130] >   default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0624 05:50:59.897318   14012 command_runner.go:130] >   kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0624 05:50:59.897318   14012 command_runner.go:130] >   kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0624 05:50:59.897318   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:59.897318   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:59.897318   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:59.897318   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:59.897318   14012 command_runner.go:130] > Events:
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0624 05:50:59.897318   14012 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:59.897318   14012 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	I0624 05:50:59.897842   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	I0624 05:50:59.897842   14012 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	I0624 05:50:59.897842   14012 command_runner.go:130] > Name:               multinode-876600-m03
	I0624 05:50:59.897842   14012 command_runner.go:130] > Roles:              <none>
	I0624 05:50:59.897842   14012 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0624 05:50:59.897842   14012 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/hostname=multinode-876600-m03
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     kubernetes.io/os=linux
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/name=multinode-876600
	I0624 05:50:59.897939   14012 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0624 05:50:59.898048   14012 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0624 05:50:59.898048   14012 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0624 05:50:59.898110   14012 command_runner.go:130] > CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	I0624 05:50:59.898133   14012 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0624 05:50:59.898133   14012 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0624 05:50:59.898133   14012 command_runner.go:130] > Unschedulable:      false
	I0624 05:50:59.898162   14012 command_runner.go:130] > Lease:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   HolderIdentity:  multinode-876600-m03
	I0624 05:50:59.898162   14012 command_runner.go:130] >   AcquireTime:     <unset>
	I0624 05:50:59.898162   14012 command_runner.go:130] >   RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	I0624 05:50:59.898162   14012 command_runner.go:130] > Conditions:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0624 05:50:59.898162   14012 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0624 05:50:59.898162   14012 command_runner.go:130] > Addresses:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   InternalIP:  172.31.210.168
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Hostname:    multinode-876600-m03
	I0624 05:50:59.898162   14012 command_runner.go:130] > Capacity:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.898162   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.898162   14012 command_runner.go:130] > Allocatable:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   cpu:                2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   hugepages-2Mi:      0
	I0624 05:50:59.898162   14012 command_runner.go:130] >   memory:             2164264Ki
	I0624 05:50:59.898162   14012 command_runner.go:130] >   pods:               110
	I0624 05:50:59.898162   14012 command_runner.go:130] > System Info:
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	I0624 05:50:59.898162   14012 command_runner.go:130] >   System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kernel Version:             5.10.207
	I0624 05:50:59.898162   14012 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Operating System:           linux
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Architecture:               amd64
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kubelet Version:            v1.30.2
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Kube-Proxy Version:         v1.30.2
	I0624 05:50:59.898162   14012 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0624 05:50:59.898162   14012 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0624 05:50:59.898162   14012 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0624 05:50:59.898162   14012 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0624 05:50:59.898694   14012 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0624 05:50:59.898694   14012 command_runner.go:130] >   kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0624 05:50:59.898757   14012 command_runner.go:130] >   kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0624 05:50:59.898757   14012 command_runner.go:130] > Allocated resources:
	I0624 05:50:59.898757   14012 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   Resource           Requests   Limits
	I0624 05:50:59.898757   14012 command_runner.go:130] >   --------           --------   ------
	I0624 05:50:59.898757   14012 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0624 05:50:59.898757   14012 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0624 05:50:59.898849   14012 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0624 05:50:59.898849   14012 command_runner.go:130] > Events:
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0624 05:50:59.898873   14012 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0624 05:50:59.898873   14012 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0624 05:50:59.899026   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.899026   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:59.899065   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeReady                5m39s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	I0624 05:50:59.899106   14012 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
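The describe output above shows multinode-876600 reporting Ready while multinode-876600-m02 and -m03 have drifted to Unknown conditions after their kubelets stopped posting status. Purely as an illustration (this is not part of the minikube test harness), the same Ready conditions can be read programmatically with client-go; the kubeconfig path below mirrors the one used by the describe command and is otherwise an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from the describe command above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print only the Ready condition for each node, matching the Conditions
	// table in the describe output.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}

Against the cluster state captured above, such a check would report Ready=True for the control plane and Ready=Unknown for the two worker nodes.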
	I0624 05:50:59.909496   14012 logs.go:123] Gathering logs for kube-scheduler [d7d8d18e1b11] ...
	I0624 05:50:59.909496   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d8d18e1b11"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:22.188709       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.692661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.692881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.693021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.693052       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.723742       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.725099       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727768       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! I0624 12:26:23.727883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.733417       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! E0624 12:26:23.734043       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.735465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.947382   14012 command_runner.go:130] ! E0624 12:26:23.735639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.947382   14012 command_runner.go:130] ! W0624 12:26:23.735886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.947927   14012 command_runner.go:130] ! E0624 12:26:23.736225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.947995   14012 command_runner.go:130] ! W0624 12:26:23.736258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.947995   14012 command_runner.go:130] ! E0624 12:26:23.736724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.948080   14012 command_runner.go:130] ! W0624 12:26:23.736138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948145   14012 command_runner.go:130] ! E0624 12:26:23.737192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948145   14012 command_runner.go:130] ! W0624 12:26:23.739149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948229   14012 command_runner.go:130] ! E0624 12:26:23.739192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.740856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.740889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.741068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! E0624 12:26:23.741177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948261   14012 command_runner.go:130] ! W0624 12:26:23.741257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.948789   14012 command_runner.go:130] ! E0624 12:26:23.741289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.602721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.602778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.639924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.640054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.715283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.716189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.781091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.781145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.781214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.948855   14012 command_runner.go:130] ! W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.949382   14012 command_runner.go:130] ! E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0624 05:50:59.949533   14012 command_runner.go:130] ! W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.949533   14012 command_runner.go:130] ! E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0624 05:50:59.949650   14012 command_runner.go:130] ! W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.949699   14012 command_runner.go:130] ! E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:50:59.949718   14012 command_runner.go:130] ! E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 05:50:59.949718   14012 command_runner.go:130] ! I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
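The Docker journal gathered next shows cri-dockerd exiting repeatedly with "Cannot connect to the Docker daemon at unix:///var/run/docker.sock" while systemd's restart counter climbs. As a hedged sketch only (not minikube's actual startup code), a readiness probe for that socket could look like this; the timeout and poll interval are arbitrary assumptions:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForDockerSocket polls the Docker daemon's unix socket until it accepts
// a connection or the timeout expires. cri-dockerd only starts cleanly once
// this succeeds.
func waitForDockerSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("docker socket %s not ready within %s", path, timeout)
}

func main() {
	if err := waitForDockerSocket("/var/run/docker.sock", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("docker daemon is accepting connections")
	}
}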
	I0624 05:50:59.960393   14012 logs.go:123] Gathering logs for Docker ...
	I0624 05:50:59.961288   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:10 minikube cri-dockerd[224]: time="2024-06-24T12:48:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube cri-dockerd[406]: time="2024-06-24T12:48:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube cri-dockerd[426]: time="2024-06-24T12:48:15Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:15 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.994381   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:48:17 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.884685548Z" level=info msg="Starting up"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.885788144Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[653]: time="2024-06-24T12:49:01.890036429Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.922365916Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944634637Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944729437Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944788537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.944805237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945278635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945368735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945514834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945640434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945659534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.945670033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946136832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.946895229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949750819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.949842219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952432710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.952525209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953030908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953149607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.953267007Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.958827487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959018586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959045186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959061886Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959079486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959154286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959410785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959525185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959680484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959715984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959729684Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959742184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959761984Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959776784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959789884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959801884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959814184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.995373   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959824784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959844984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959858684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959869883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959880983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959896983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.959908783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960018383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960035683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960048983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960062383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960072983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960101283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960113483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960127683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960146483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960176282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960187982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960231182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960272582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960288382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960300282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960309982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960338782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960352482Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960633681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960769280Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960841480Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:01 multinode-876600 dockerd[660]: time="2024-06-24T12:49:01.960881780Z" level=info msg="containerd successfully booted in 0.041519s"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:02 multinode-876600 dockerd[653]: time="2024-06-24T12:49:02.945262615Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.223804341Z" level=info msg="Loading containers: start."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.641218114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.732814019Z" level=info msg="Loading containers: done."
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.761576529Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.762342011Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812071919Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 dockerd[653]: time="2024-06-24T12:49:03.812157017Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:03 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 systemd[1]: Stopping Docker Application Container Engine...
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:28 multinode-876600 dockerd[653]: time="2024-06-24T12:49:28.997274494Z" level=info msg="Processing signal 'terminated'"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000124734Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000358529Z" level=info msg="Daemon shutdown complete"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000525626Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:29 multinode-876600 dockerd[653]: time="2024-06-24T12:49:29.000539625Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: docker.service: Deactivated successfully.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Stopped Docker Application Container Engine.
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 systemd[1]: Starting Docker Application Container Engine...
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.084737493Z" level=info msg="Starting up"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.086025466Z" level=info msg="containerd not running, starting managed containerd"
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:30.088389717Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0624 05:50:59.996371   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.114515172Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138093079Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138154078Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138196277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138211077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138233076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138243876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138358674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138453472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138476871Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138487571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138509871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.138632268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.140915820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141061017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141185215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141274813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141300312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141316712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141325912Z" level=info msg="metadata content store policy set" policy=shared
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141647505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141735203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141753803Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141765903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141776602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.141815002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142049497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142172394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142255792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142271792Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142283692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142301791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142314591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142325791Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142336891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142346891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142357190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142366690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142383590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142395790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142405789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142415889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142426189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142435889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.997394   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142444888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142455488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142466788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142481688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142491887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142501487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142510987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142523287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142539087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142549586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142558786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142594885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142678984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142693983Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142706083Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142715083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142729083Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.142738782Z" level=info msg="NRI interface is disabled by configuration."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143034976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143530866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143648463Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:30 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:30.143683163Z" level=info msg="containerd successfully booted in 0.030094s"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.133094709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.172693982Z" level=info msg="Loading containers: start."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.453078529Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.540592303Z" level=info msg="Loading containers: done."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567477241Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.567674037Z" level=info msg="Daemon has completed initialization"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.612862394Z" level=info msg="API listen on [::]:2376"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 dockerd[1044]: time="2024-06-24T12:49:31.613035490Z" level=info msg="API listen on /var/run/docker.sock"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:31 multinode-876600 systemd[1]: Started Docker Application Container Engine.
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start docker client with request timeout 0s"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Loaded network plugin cni"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:32Z" level=info msg="Start cri-dockerd grpc backend"
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:32 multinode-876600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:37 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-ddhfw_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ccbe4517423ff6ac148bbea9b31327ba57c576472daa7bb43f3abfec4fcc848e\""
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-sq7g6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b42fe71aa0d74dc4c8bf7efabb926744c79611d65f4b30764269d069cc74e988\""
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701849613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701941911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.701961911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.998372   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.702631897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749259723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749359121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749376421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.749483319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.857346667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858201149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858312947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.858668940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5a9d5aa43e22aa4468a78b6729a52c32332f466d9713f1fc1f22b3178bfdf3cb/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909591377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909669675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909686975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:38.909798272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:38 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dc882a855c977907ea1eb78d3d2623963c99ac563395c74ee791f4e4d6c67e5/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ee4c386584ddcaac187d918f5b8e6e90f6a1893747e28fcb452abdd3f3754cc/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd15388f44a90206722f05f67e83f1569bb1f23f2bc39ccecb544b68ee13fa32/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271174729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271239827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271279026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.271405024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285087638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285231435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285249735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.285350433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407441484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407629580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.407664579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.408230568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.451094973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.458080727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.473748300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:39 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:39.474517884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:43 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455255812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455325111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455337410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.455452908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524370569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:50:59.999371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524463867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.524791761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537549994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537617493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537629693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.537708691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/422468c35b2095c5a7248117288e532bf371b7f8311ccc927c4b3cec03ff9c00/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90d48427c423b7330f429e422fa4ae6d9609e425d64c4199b78ac90942abbd3c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.976892023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977043020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.977576709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:44 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:44.978477690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001225615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001462610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.001660406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.002175695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:49:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44a8a9ab355dd20864f0e8074da9092f9f15c5cede37fc2001601d98606049c/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.402910430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403402020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403504818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:49:45 multinode-876600 dockerd[1050]: time="2024-06-24T12:49:45.403958608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1044]: time="2024-06-24T12:50:15.730882144Z" level=info msg="ignoring event" container=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.000371   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.001370   14012 command_runner.go:130] > Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0624 05:51:00.032368   14012 logs.go:123] Gathering logs for dmesg ...
	I0624 05:51:00.032368   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.119067] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.019556] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.056836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.020537] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0624 05:51:00.056388   14012 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0624 05:51:00.056388   14012 command_runner.go:130] > [Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	I0624 05:51:00.056388   14012 command_runner.go:130] > [  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	I0624 05:51:00.057379   14012 command_runner.go:130] > [  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	I0624 05:51:00.058423   14012 logs.go:123] Gathering logs for coredns [f46bdc12472e] ...
	I0624 05:51:00.058423   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46bdc12472e"
	I0624 05:51:00.093374   14012 command_runner.go:130] > .:53
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	I0624 05:51:00.093374   14012 command_runner.go:130] > CoreDNS-1.11.1
	I0624 05:51:00.093374   14012 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 127.0.0.1:38468 - 10173 "HINFO IN 7379731890712669450.5580048866765570142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046871074s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:45037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:51655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.179407896s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:40053 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.0309719s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48757 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.044029328s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:37448 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244204s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:56655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000191903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:53194 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000903615s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:52602 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000202304s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:36063 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:59545 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025696712s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:51570 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161503s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245804s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:50843 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.020266425s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:54029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176103s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145603s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:44648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115802s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0624 05:51:00.093374   14012 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0624 05:51:00.096374   14012 logs.go:123] Gathering logs for kube-controller-manager [39d593f24d2b] ...
	I0624 05:51:00.096374   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d593f24d2b"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:41.611040       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.162381       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.162626       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.167365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.170015       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.170537       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:42.171222       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.131504       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.132688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.147920       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.148575       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.148592       1 shared_informer.go:313] Waiting for caches to sync for job
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168288       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168585       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.168603       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.174208       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.204857       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.205200       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0624 05:51:00.129080   14012 command_runner.go:130] ! I0624 12:49:45.205220       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208199       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208279       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208292       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.208682       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211075       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211337       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.211469       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212664       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212885       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.212921       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0624 05:51:00.129670   14012 command_runner.go:130] ! I0624 12:49:45.215407       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0624 05:51:00.129861   14012 command_runner.go:130] ! I0624 12:49:45.215514       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0624 05:51:00.129883   14012 command_runner.go:130] ! I0624 12:49:45.215556       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0624 05:51:00.129910   14012 command_runner.go:130] ! I0624 12:49:45.215770       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0624 05:51:00.129910   14012 command_runner.go:130] ! I0624 12:49:45.215858       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.232560       1 shared_informer.go:320] Caches are synced for tokens
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.270108       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.272041       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.272064       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0624 05:51:00.130011   14012 command_runner.go:130] ! I0624 12:49:45.275068       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0624 05:51:00.130080   14012 command_runner.go:130] ! I0624 12:49:45.277065       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0624 05:51:00.130080   14012 command_runner.go:130] ! I0624 12:49:45.277084       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284603       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284828       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0624 05:51:00.130122   14012 command_runner.go:130] ! I0624 12:49:45.284846       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0624 05:51:00.130168   14012 command_runner.go:130] ! I0624 12:49:45.284874       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284882       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284916       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284923       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284946       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.284952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285054       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285251       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.285306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287516       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287669       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287679       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.287687       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0624 05:51:00.130194   14012 command_runner.go:130] ! E0624 12:49:45.300773       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.300902       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.312613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.313106       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.313142       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322260       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322522       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.322577       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336372       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336549       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0624 05:51:00.130194   14012 command_runner.go:130] ! I0624 12:49:45.336561       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0624 05:51:00.130751   14012 command_runner.go:130] ! I0624 12:49:45.345594       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0624 05:51:00.130751   14012 command_runner.go:130] ! I0624 12:49:45.346399       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0624 05:51:00.130797   14012 command_runner.go:130] ! I0624 12:49:45.346569       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0624 05:51:00.130797   14012 command_runner.go:130] ! I0624 12:49:45.367646       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.367851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.367863       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.378165       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0624 05:51:00.130849   14012 command_runner.go:130] ! I0624 12:49:45.378901       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.379646       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.387114       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.390531       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0624 05:51:00.130920   14012 command_runner.go:130] ! I0624 12:49:45.389629       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0624 05:51:00.131002   14012 command_runner.go:130] ! I0624 12:49:45.390839       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.390877       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398432       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398651       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0624 05:51:00.131034   14012 command_runner.go:130] ! I0624 12:49:45.398662       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415213       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.415822       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.416603       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0624 05:51:00.131106   14012 command_runner.go:130] ! I0624 12:49:45.424702       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.425586       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.425764       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0624 05:51:00.131195   14012 command_runner.go:130] ! I0624 12:49:45.436755       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:45.437436       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:45.437459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.465615       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.465741       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0624 05:51:00.131258   14012 command_runner.go:130] ! I0624 12:49:55.467240       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0624 05:51:00.131322   14012 command_runner.go:130] ! I0624 12:49:55.467274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0624 05:51:00.131322   14012 command_runner.go:130] ! I0624 12:49:55.468497       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.469360       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.469377       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.471510       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0624 05:51:00.131382   14012 command_runner.go:130] ! I0624 12:49:55.472283       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.472444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.506782       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0624 05:51:00.131447   14012 command_runner.go:130] ! I0624 12:49:55.508139       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.509911       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.511130       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.511307       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.513825       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.514534       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.514594       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0624 05:51:00.131521   14012 command_runner.go:130] ! I0624 12:49:55.519187       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.519640       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.520911       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536120       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536258       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0624 05:51:00.131727   14012 command_runner.go:130] ! I0624 12:49:55.536357       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0624 05:51:00.131817   14012 command_runner.go:130] ! I0624 12:49:55.536487       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0624 05:51:00.131838   14012 command_runner.go:130] ! I0624 12:49:55.536563       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0624 05:51:00.131905   14012 command_runner.go:130] ! I0624 12:49:55.536711       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.536804       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.536933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537053       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537439       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537526       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537600       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537659       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537693       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537784       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.537942       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538071       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538183       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.538608       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.544968       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.545425       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.545485       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547347       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547559       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.547756       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.550357       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.550389       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! E0624 12:49:55.553426       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.553471       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.555656       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.556160       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0624 05:51:00.131964   14012 command_runner.go:130] ! I0624 12:49:55.556254       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0624 05:51:00.132493   14012 command_runner.go:130] ! I0624 12:49:55.558670       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0624 05:51:00.132493   14012 command_runner.go:130] ! I0624 12:49:55.559245       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.559312       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.561844       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.561894       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.562386       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0624 05:51:00.132543   14012 command_runner.go:130] ! I0624 12:49:55.563348       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0624 05:51:00.132634   14012 command_runner.go:130] ! I0624 12:49:55.563500       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.564944       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.565114       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.564958       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0624 05:51:00.132656   14012 command_runner.go:130] ! I0624 12:49:55.565487       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.579438       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.591124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132742   14012 command_runner.go:130] ! I0624 12:49:55.598082       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600\" does not exist"
	I0624 05:51:00.132810   14012 command_runner.go:130] ! I0624 12:49:55.598223       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 05:51:00.132810   14012 command_runner.go:130] ! I0624 12:49:55.598507       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132876   14012 command_runner.go:130] ! I0624 12:49:55.598710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 05:51:00.132935   14012 command_runner.go:130] ! I0624 12:49:55.599233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.132952   14012 command_runner.go:130] ! I0624 12:49:55.608238       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.618340       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.618519       1 shared_informer.go:320] Caches are synced for service account
	I0624 05:51:00.132979   14012 command_runner.go:130] ! I0624 12:49:55.624144       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0624 05:51:00.133042   14012 command_runner.go:130] ! I0624 12:49:55.636852       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0624 05:51:00.133042   14012 command_runner.go:130] ! I0624 12:49:55.637524       1 shared_informer.go:320] Caches are synced for TTL
	I0624 05:51:00.133069   14012 command_runner.go:130] ! I0624 12:49:55.646541       1 shared_informer.go:320] Caches are synced for daemon sets
	I0624 05:51:00.133102   14012 command_runner.go:130] ! I0624 12:49:55.649566       1 shared_informer.go:320] Caches are synced for job
	I0624 05:51:00.133144   14012 command_runner.go:130] ! I0624 12:49:55.657061       1 shared_informer.go:320] Caches are synced for endpoint
	I0624 05:51:00.133144   14012 command_runner.go:130] ! I0624 12:49:55.659468       1 shared_informer.go:320] Caches are synced for cronjob
	I0624 05:51:00.133188   14012 command_runner.go:130] ! I0624 12:49:55.664252       1 shared_informer.go:320] Caches are synced for taint
	I0624 05:51:00.133188   14012 command_runner.go:130] ! I0624 12:49:55.664599       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0624 05:51:00.133229   14012 command_runner.go:130] ! I0624 12:49:55.666260       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0624 05:51:00.133229   14012 command_runner.go:130] ! I0624 12:49:55.667638       1 shared_informer.go:320] Caches are synced for node
	I0624 05:51:00.133274   14012 command_runner.go:130] ! I0624 12:49:55.667809       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.668402       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.668345       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.670484       1 shared_informer.go:320] Caches are synced for HPA
	I0624 05:51:00.133316   14012 command_runner.go:130] ! I0624 12:49:55.670543       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0624 05:51:00.133380   14012 command_runner.go:130] ! I0624 12:49:55.673115       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0624 05:51:00.133406   14012 command_runner.go:130] ! I0624 12:49:55.673584       1 shared_informer.go:320] Caches are synced for PVC protection
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.673809       1 shared_informer.go:320] Caches are synced for namespace
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.677814       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.684929       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.685678       1 shared_informer.go:320] Caches are synced for ephemeral
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.691958       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697077       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697496       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.697524       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.698202       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.698711       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.705711       1 shared_informer.go:320] Caches are synced for expand
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.709368       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.713133       1 shared_informer.go:320] Caches are synced for disruption
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.713139       1 shared_informer.go:320] Caches are synced for GC
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.718286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.722094       1 shared_informer.go:320] Caches are synced for deployment
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.726359       1 shared_informer.go:320] Caches are synced for stateful set
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.730966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.629723ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.731762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.605µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.738505       1 shared_informer.go:320] Caches are synced for resource quota
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.739127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.613566ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.739715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.803µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 05:51:00.133436   14012 command_runner.go:130] ! I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 05:51:00.151228   14012 logs.go:123] Gathering logs for kindnet [404cdbe8e049] ...
	I0624 05:51:00.151355   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404cdbe8e049"
	I0624 05:51:00.182001   14012 command_runner.go:130] ! I0624 12:49:46.050915       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0624 05:51:00.182001   14012 command_runner.go:130] ! I0624 12:49:46.056731       1 main.go:107] hostIP = 172.31.217.139
	I0624 05:51:00.182001   14012 command_runner.go:130] ! podIP = 172.31.217.139
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.056908       1 main.go:116] setting mtu 1500 for CNI 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.056957       1 main.go:146] kindnetd IP family: "ipv4"
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:49:46.057261       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.444701       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.504533       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.504651       1 main.go:227] handling current node
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505618       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505690       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.505873       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.31.221.199 Flags: [] Table: 0} 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506079       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506099       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:16.506166       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.31.210.168 Flags: [] Table: 0} 
	I0624 05:51:00.183010   14012 command_runner.go:130] ! I0624 12:50:26.523420       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523536       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523551       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523559       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.523945       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:26.524012       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.537564       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538221       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538597       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.538771       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.539064       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:36.539185       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552158       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552252       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552265       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552272       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552712       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:46.552726       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565654       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565717       1 main.go:227] handling current node
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565730       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.565753       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.566419       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 05:51:00.183993   14012 command_runner.go:130] ! I0624 12:50:56.566456       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
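	(Editor's note on the kindnet entries above: the daemon repeats a simple reconciliation loop — for every remote node it looks up that node's pod CIDR and programs a route through the node's IP as gateway. The Go sketch below illustrates the same idea with plain "ip route replace" calls via os/exec; the CIDRs and node IPs are copied from the log purely for illustration, and this is not kindnet's own route-programming code.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// nodeRoute captures what the log shows kindnet deriving per remote node:
	// the pod CIDR assigned to that node and the node IP used as the gateway.
	type nodeRoute struct {
		podCIDR string
		nodeIP  string
	}

	func main() {
		// Values mirroring multinode-876600-m02 / -m03 above (illustrative only).
		routes := []nodeRoute{
			{podCIDR: "10.244.1.0/24", nodeIP: "172.31.221.199"},
			{podCIDR: "10.244.3.0/24", nodeIP: "172.31.210.168"},
		}
		for _, r := range routes {
			// "ip route replace" is idempotent, so re-running it on every sync
			// pass is safe; requires root and iproute2 on a Linux host.
			cmd := exec.Command("ip", "route", "replace", r.podCIDR, "via", r.nodeIP)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("route %s via %s failed: %v (%s)\n", r.podCIDR, r.nodeIP, err, out)
			}
		}
	}
	```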
	I0624 05:51:00.186995   14012 logs.go:123] Gathering logs for kube-apiserver [d02d42ecc648] ...
	I0624 05:51:00.186995   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d02d42ecc648"
	I0624 05:51:00.215630   14012 command_runner.go:130] ! I0624 12:49:40.286095       1 options.go:221] external host was not specified, using 172.31.217.139
	I0624 05:51:00.215630   14012 command_runner.go:130] ! I0624 12:49:40.295605       1 server.go:148] Version: v1.30.2
	I0624 05:51:00.216454   14012 command_runner.go:130] ! I0624 12:49:40.295676       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.281015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.297083       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.299328       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.299550       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0624 05:51:00.216487   14012 command_runner.go:130] ! I0624 12:49:41.306069       1 instance.go:299] Using reconciler: lease
	I0624 05:51:00.216680   14012 command_runner.go:130] ! I0624 12:49:41.405217       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0624 05:51:00.216925   14012 command_runner.go:130] ! W0624 12:49:41.405825       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:41.829318       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:41.830077       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.148155       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.318694       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.350295       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.350434       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.350445       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.351427       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.351537       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.352903       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.353876       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.353968       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.354009       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.355665       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.355756       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.357405       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.357497       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.357508       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.358543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.358633       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.359043       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.360333       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.362922       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363103       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363118       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.363718       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363818       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! W0624 12:49:42.363828       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.216957   14012 command_runner.go:130] ! I0624 12:49:42.365198       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0624 05:51:00.217495   14012 command_runner.go:130] ! W0624 12:49:42.365216       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0624 05:51:00.217495   14012 command_runner.go:130] ! I0624 12:49:42.367128       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367222       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367232       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! I0624 12:49:42.367745       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367857       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! W0624 12:49:42.367867       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217544   14012 command_runner.go:130] ! I0624 12:49:42.370952       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.371093       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.371105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! I0624 12:49:42.372428       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! I0624 12:49:42.373872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.373966       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0624 05:51:00.217614   14012 command_runner.go:130] ! W0624 12:49:42.374041       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217705   14012 command_runner.go:130] ! I0624 12:49:42.380395       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0624 05:51:00.217705   14012 command_runner.go:130] ! W0624 12:49:42.380437       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0624 05:51:00.217705   14012 command_runner.go:130] ! W0624 12:49:42.380445       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0624 05:51:00.217790   14012 command_runner.go:130] ! I0624 12:49:42.383279       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0624 05:51:00.217815   14012 command_runner.go:130] ! W0624 12:49:42.383388       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.383399       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:42.384573       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.384717       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:42.400364       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0624 05:51:00.217847   14012 command_runner.go:130] ! W0624 12:49:42.400902       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.026954       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.027208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.027712       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028563       1 secure_serving.go:213] Serving securely on [::]:8443
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028945       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.028963       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.029941       1 aggregator.go:163] waiting for initial CRD sync...
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030691       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030768       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.030807       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.031185       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032162       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032207       1 controller.go:78] Starting OpenAPI AggregationController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032239       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032246       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032457       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.032964       1 available_controller.go:423] Starting AvailableConditionController
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.033084       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0624 05:51:00.217847   14012 command_runner.go:130] ! I0624 12:49:43.033207       1 controller.go:139] Starting OpenAPI controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033225       1 controller.go:116] Starting legacy_token_tracking_controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033375       1 controller.go:87] Starting OpenAPI V3 controller
	I0624 05:51:00.218380   14012 command_runner.go:130] ! I0624 12:49:43.033514       1 naming_controller.go:291] Starting NamingConditionController
	I0624 05:51:00.218441   14012 command_runner.go:130] ! I0624 12:49:43.033541       1 establishing_controller.go:76] Starting EstablishingController
	I0624 05:51:00.218441   14012 command_runner.go:130] ! I0624 12:49:43.033669       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033741       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033862       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0624 05:51:00.218483   14012 command_runner.go:130] ! I0624 12:49:43.033333       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0624 05:51:00.218556   14012 command_runner.go:130] ! I0624 12:49:43.034209       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.034287       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.035699       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.093771       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0624 05:51:00.218581   14012 command_runner.go:130] ! I0624 12:49:43.094094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.129432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 05:51:00.218686   14012 command_runner.go:130] ! I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 05:51:00.218748   14012 command_runner.go:130] ! I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 05:51:00.218806   14012 command_runner.go:130] ! I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 05:51:00.218828   14012 command_runner.go:130] ! I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0624 05:51:00.218856   14012 command_runner.go:130] ! W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0624 05:51:00.218856   14012 command_runner.go:130] ! W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	I0624 05:51:00.228150   14012 logs.go:123] Gathering logs for kube-scheduler [92813c7375dd] ...
	I0624 05:51:00.228298   14012 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92813c7375dd"
	I0624 05:51:00.255776   14012 command_runner.go:130] ! I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	I0624 05:51:00.255776   14012 command_runner.go:130] ! W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0624 05:51:00.256322   14012 command_runner.go:130] ! W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0624 05:51:00.256322   14012 command_runner.go:130] ! W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0624 05:51:00.256411   14012 command_runner.go:130] ! W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 05:51:00.256440   14012 command_runner.go:130] ! I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 05:51:00.256507   14012 command_runner.go:130] ! I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 05:51:02.761802   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:51:02.761802   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.761802   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.761802   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.766423   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.767425   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.767425   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Audit-Id: a5332d78-2dfa-41a7-a889-d3a1aa1e43bb
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.767425   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.767497   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.771045   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1968"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86620 chars]
	I0624 05:51:02.776186   14012 system_pods.go:59] 12 kube-system pods found
	I0624 05:51:02.776186   14012 system_pods.go:61] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:51:02.776186   14012 system_pods.go:61] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:51:02.776186   14012 system_pods.go:74] duration metric: took 3.7545293s to wait for pod list to return data ...
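	(Editor's note on the round_trippers lines above: they are minikube's instrumented HTTP client issuing a plain GET against the API server's kube-system pod list and then checking each pod's phase. A minimal Go sketch of an equivalent request follows; the bearer-token file and the decision to skip TLS verification are assumptions made to keep the sketch short, not what minikube itself does — it builds its client from the profile's kubeconfig.)

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// Hypothetical: a service-account-style bearer token saved locally.
		token, err := os.ReadFile("token")
		if err != nil {
			panic(err)
		}

		// Skipping certificate verification keeps the example self-contained;
		// a real client would trust the cluster CA from the kubeconfig instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		req, _ := http.NewRequest("GET",
			"https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods", nil)
		req.Header.Set("Accept", "application/json")
		req.Header.Set("Authorization", "Bearer "+string(token))

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(len(body), "bytes of PodList JSON")
	}
	```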
	I0624 05:51:02.776186   14012 default_sa.go:34] waiting for default service account to be created ...
	I0624 05:51:02.776186   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/default/serviceaccounts
	I0624 05:51:02.776186   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.776186   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.776186   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.779828   14012 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0624 05:51:02.779828   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.779828   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.779828   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.780669   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.780669   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Content-Length: 262
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.780669   14012 round_trippers.go:580]     Audit-Id: 3d4479b7-8e67-4bb1-8585-674b083d983a
	I0624 05:51:02.780669   14012 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b646e33d-a735-486e-bc23-8dd57a7f6b3f","resourceVersion":"332","creationTimestamp":"2024-06-24T12:26:40Z"}}]}
	I0624 05:51:02.781040   14012 default_sa.go:45] found service account: "default"
	I0624 05:51:02.781115   14012 default_sa.go:55] duration metric: took 4.8535ms for default service account to be created ...
	I0624 05:51:02.781115   14012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0624 05:51:02.781195   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/namespaces/kube-system/pods
	I0624 05:51:02.781285   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.781285   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.781285   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.785800   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.785800   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Audit-Id: 19416cc3-9eeb-4828-bbbb-377b2329c235
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.786565   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.786565   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.786565   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.790464   14012 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-sq7g6","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"921aea5c-15b7-4780-bd12-7d7eb82e97cc","resourceVersion":"1955","creationTimestamp":"2024-06-24T12:26:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3efee987-5f4a-4303-8933-6938ae34c633","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-24T12:26:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3efee987-5f4a-4303-8933-6938ae34c633\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86620 chars]
	I0624 05:51:02.793961   14012 system_pods.go:86] 12 kube-system pods found
	I0624 05:51:02.793961   14012 system_pods.go:89] "coredns-7db6d8ff4d-sq7g6" [921aea5c-15b7-4780-bd12-7d7eb82e97cc] Running
	I0624 05:51:02.793961   14012 system_pods.go:89] "etcd-multinode-876600" [c5bc6108-18d3-4bf9-8b39-a020f13cfefb] Running
	I0624 05:51:02.793961   14012 system_pods.go:89] "kindnet-9cfcv" [f9906062-7c73-46eb-a20d-afe17436fa32] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kindnet-t9wzm" [00450582-a600-4896-a8d9-d69a4c2c4241] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kindnet-x7zb4" [0529046f-d42a-4351-9b49-2572866afd47] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-apiserver-multinode-876600" [52a1504b-2338-458c-b448-92e8836b479a] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-controller-manager-multinode-876600" [ce6cdb16-15c7-48bf-9141-2e1a39212098] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-hjjs8" [1e148504-3300-4591-9576-7c5597851f41] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-lcc9v" [038c238e-3e2b-4d31-a68c-64bf29863d8f] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-proxy-wf7jm" [b4f99ace-bf94-40d8-b28f-27ec938418ef] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "kube-scheduler-multinode-876600" [90049cc9-8d7b-4f11-8126-038131eafec1] Running
	I0624 05:51:02.794861   14012 system_pods.go:89] "storage-provisioner" [056be0f2-af5c-427e-961b-a9101f3186d8] Running
	I0624 05:51:02.794861   14012 system_pods.go:126] duration metric: took 13.7458ms to wait for k8s-apps to be running ...
	I0624 05:51:02.794947   14012 system_svc.go:44] waiting for kubelet service to be running ....
	I0624 05:51:02.805870   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:51:02.834989   14012 system_svc.go:56] duration metric: took 40.042ms WaitForService to wait for kubelet
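	(Editor's note on the kubelet check above: it relies on systemctl's exit status — "is-active --quiet" prints nothing and exits zero only when the unit is active. A small local sketch of the same probe, run directly rather than through the SSH runner minikube uses:)

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive reports whether a systemd unit is currently active by checking
	// the exit status of "systemctl is-active --quiet <unit>".
	func isActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		if isActive("kubelet") {
			fmt.Println("kubelet service is running")
		} else {
			fmt.Println("kubelet service is not running")
		}
	}
	```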
	I0624 05:51:02.834989   14012 kubeadm.go:576] duration metric: took 1m14.468494s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0624 05:51:02.834989   14012 node_conditions.go:102] verifying NodePressure condition ...
	I0624 05:51:02.834989   14012 round_trippers.go:463] GET https://172.31.217.139:8443/api/v1/nodes
	I0624 05:51:02.834989   14012 round_trippers.go:469] Request Headers:
	I0624 05:51:02.834989   14012 round_trippers.go:473]     Accept: application/json, */*
	I0624 05:51:02.834989   14012 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0624 05:51:02.839573   14012 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0624 05:51:02.839573   14012 round_trippers.go:577] Response Headers:
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Content-Type: application/json
	I0624 05:51:02.839878   14012 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd043b2e-b31a-49bd-8b52-37829c52e9a5
	I0624 05:51:02.839878   14012 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 042a8443-1676-403a-9c76-17b820b18d1c
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Date: Mon, 24 Jun 2024 12:51:02 GMT
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Audit-Id: 860307c7-6447-4cb4-be2d-617cc1db0fb0
	I0624 05:51:02.839878   14012 round_trippers.go:580]     Cache-Control: no-cache, private
	I0624 05:51:02.840393   14012 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1969"},"items":[{"metadata":{"name":"multinode-876600","uid":"19bbeae4-cccd-49f3-884b-1875eb12d0ae","resourceVersion":"1918","creationTimestamp":"2024-06-24T12:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-876600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"311081119e98d6eb0a16473abab8b278d38b85ec","minikube.k8s.io/name":"multinode-876600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_24T05_26_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0624 05:51:02.841358   14012 node_conditions.go:123] node cpu capacity is 2
	I0624 05:51:02.841358   14012 node_conditions.go:105] duration metric: took 6.3691ms to run NodePressure ...
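	The three capacity pairs above are read straight out of the NodeList response. For a manual spot-check of the same data, a minimal kubectl sketch (the context name "multinode-876600" is assumed here, matching how minikube normally names the profile's context; it is not part of this test run):

# Illustrative only: print per-node cpu and ephemeral-storage capacity
kubectl --context multinode-876600 get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}cpu={.status.capacity.cpu}{"\t"}ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}{end}'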
	I0624 05:51:02.841358   14012 start.go:240] waiting for startup goroutines ...
	I0624 05:51:02.841358   14012 start.go:245] waiting for cluster config update ...
	I0624 05:51:02.841358   14012 start.go:254] writing updated cluster config ...
	I0624 05:51:02.845170   14012 out.go:177] 
	I0624 05:51:02.849288   14012 config.go:182] Loaded profile config "ha-340000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:51:02.858779   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:51:02.858779   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:02.864794   14012 out.go:177] * Starting "multinode-876600-m02" worker node in "multinode-876600" cluster
	I0624 05:51:02.866784   14012 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 05:51:02.866784   14012 cache.go:56] Caching tarball of preloaded images
	I0624 05:51:02.867783   14012 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0624 05:51:02.867783   14012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 05:51:02.867783   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:02.869782   14012 start.go:360] acquireMachinesLock for multinode-876600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0624 05:51:02.869782   14012 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-876600-m02"
	I0624 05:51:02.870783   14012 start.go:96] Skipping create...Using existing machine configuration
	I0624 05:51:02.870783   14012 fix.go:54] fixHost starting: m02
	I0624 05:51:02.870783   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:05.131447   14012 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 05:51:05.131533   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:05.131533   14012 fix.go:112] recreateIfNeeded on multinode-876600-m02: state=Stopped err=<nil>
	W0624 05:51:05.131533   14012 fix.go:138] unexpected machine state, will restart: <nil>
	I0624 05:51:05.135439   14012 out.go:177] * Restarting existing hyperv VM for "multinode-876600-m02" ...
	I0624 05:51:05.137552   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-876600-m02
	I0624 05:51:08.249994   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:08.251012   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:08.251012   14012 main.go:141] libmachine: Waiting for host to start...
	I0624 05:51:08.251052   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:10.531596   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:13.155592   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:13.155592   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:14.164598   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:16.457611   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:16.458354   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:16.458354   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:19.058407   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:19.059515   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:20.065836   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:22.305520   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:22.306282   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:22.306327   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:24.902710   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:24.903585   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:25.912870   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:28.210316   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:28.210316   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:28.210878   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:30.828602   14012 main.go:141] libmachine: [stdout =====>] : 
	I0624 05:51:30.828602   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:31.829668   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:34.127197   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:36.751199   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:36.751886   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:36.756189   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:38.934118   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:41.603473   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:41.603473   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:41.604055   14012 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600\config.json ...
	I0624 05:51:41.607858   14012 machine.go:94] provisionDockerMachine start ...
	I0624 05:51:41.607858   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:43.794910   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:43.794910   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:43.795709   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:46.399615   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:46.399615   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:46.405745   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:46.405745   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:46.405745   14012 main.go:141] libmachine: About to run SSH command:
	hostname
	I0624 05:51:46.549778   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0624 05:51:46.549899   14012 buildroot.go:166] provisioning hostname "multinode-876600-m02"
	I0624 05:51:46.550027   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:48.763165   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:48.763165   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:48.763296   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:51.423313   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:51.423313   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:51.430170   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:51.430767   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:51.430767   14012 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-876600-m02 && echo "multinode-876600-m02" | sudo tee /etc/hostname
	I0624 05:51:51.604140   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-876600-m02
	
	I0624 05:51:51.604757   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:53.816718   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:53.816718   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:53.816938   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:51:56.468229   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:51:56.468229   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:56.474316   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:51:56.474316   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:51:56.474938   14012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-876600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-876600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-876600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0624 05:51:56.632615   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0624 05:51:56.632679   14012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0624 05:51:56.632679   14012 buildroot.go:174] setting up certificates
	I0624 05:51:56.632679   14012 provision.go:84] configureAuth start
	I0624 05:51:56.632679   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:51:58.829129   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:51:58.829129   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:51:58.829859   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:01.481269   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:01.481507   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:01.481507   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:03.621112   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:03.621112   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:03.621523   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:06.179846   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:06.179846   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:06.179846   14012 provision.go:143] copyHostCerts
	I0624 05:52:06.179846   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0624 05:52:06.179846   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0624 05:52:06.179846   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0624 05:52:06.180681   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0624 05:52:06.181724   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0624 05:52:06.181887   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0624 05:52:06.181887   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0624 05:52:06.181887   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0624 05:52:06.183156   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0624 05:52:06.183156   14012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0624 05:52:06.183156   14012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0624 05:52:06.183829   14012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0624 05:52:06.184561   14012 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-876600-m02 san=[127.0.0.1 172.31.216.161 localhost minikube multinode-876600-m02]
	I0624 05:52:06.555920   14012 provision.go:177] copyRemoteCerts
	I0624 05:52:06.566778   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0624 05:52:06.567791   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:08.765561   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:08.765561   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:08.765974   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:11.398907   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:11.398907   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:11.399240   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:11.516095   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9492051s)
	I0624 05:52:11.516180   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0624 05:52:11.516180   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0624 05:52:11.569260   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0624 05:52:11.569390   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0624 05:52:11.619060   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0624 05:52:11.619566   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0624 05:52:11.672123   14012 provision.go:87] duration metric: took 15.0393878s to configureAuth
	I0624 05:52:11.672123   14012 buildroot.go:189] setting minikube options for container-runtime
	I0624 05:52:11.673119   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:52:11.673119   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:13.857294   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:13.857753   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:13.857753   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:16.502788   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:16.502788   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:16.510947   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:16.511487   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:16.511487   14012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0624 05:52:16.647937   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0624 05:52:16.647937   14012 buildroot.go:70] root file system type: tmpfs
	I0624 05:52:16.648636   14012 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0624 05:52:16.648694   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:18.786522   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:18.786522   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:18.786891   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:21.430377   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:21.431130   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:21.437655   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:21.438151   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:21.438299   14012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.31.217.139"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0624 05:52:21.611528   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.31.217.139
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0624 05:52:21.611749   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:23.820344   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:23.820344   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:23.820524   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:26.496862   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:26.496862   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:26.503204   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:26.503954   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:26.503954   14012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0624 05:52:28.827759   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0624 05:52:28.827759   14012 machine.go:97] duration metric: took 47.2197265s to provisionDockerMachine
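	Because no /lib/systemd/system/docker.service existed on the guest yet, the diff above fails and the freshly written docker.service.new is simply moved into place, enabled, and restarted. The harness re-reads the unit later with systemctl cat docker.service; the same verification can be sketched standalone with plain systemctl calls (illustrative, not part of the test run):

sudo systemctl cat docker.service      # the unit systemd actually loaded
systemctl is-enabled docker            # expect "enabled" after the symlink reported above
systemctl show -p ExecStart docker     # effective ExecStart after the empty reset line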
	I0624 05:52:28.827759   14012 start.go:293] postStartSetup for "multinode-876600-m02" (driver="hyperv")
	I0624 05:52:28.827759   14012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0624 05:52:28.841025   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0624 05:52:28.841025   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:31.014936   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:33.661417   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:33.661684   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:33.661930   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:33.774693   14012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9336497s)
	I0624 05:52:33.787058   14012 ssh_runner.go:195] Run: cat /etc/os-release
	I0624 05:52:33.795230   14012 command_runner.go:130] > NAME=Buildroot
	I0624 05:52:33.795301   14012 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0624 05:52:33.795301   14012 command_runner.go:130] > ID=buildroot
	I0624 05:52:33.795301   14012 command_runner.go:130] > VERSION_ID=2023.02.9
	I0624 05:52:33.795301   14012 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0624 05:52:33.795301   14012 info.go:137] Remote host: Buildroot 2023.02.9
	I0624 05:52:33.795665   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0624 05:52:33.795913   14012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0624 05:52:33.797273   14012 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> 9442.pem in /etc/ssl/certs
	I0624 05:52:33.797333   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /etc/ssl/certs/9442.pem
	I0624 05:52:33.812639   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0624 05:52:33.834112   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /etc/ssl/certs/9442.pem (1708 bytes)
	I0624 05:52:33.885815   14012 start.go:296] duration metric: took 5.0580376s for postStartSetup
	I0624 05:52:33.885893   14012 fix.go:56] duration metric: took 1m31.0147735s for fixHost
	I0624 05:52:33.885989   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:36.065560   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:36.065806   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:36.065806   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:38.673023   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:38.673023   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:38.679746   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:38.680565   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:38.680565   14012 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0624 05:52:38.816943   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719233558.822874030
	
	I0624 05:52:38.816943   14012 fix.go:216] guest clock: 1719233558.822874030
	I0624 05:52:38.816943   14012 fix.go:229] Guest: 2024-06-24 05:52:38.82287403 -0700 PDT Remote: 2024-06-24 05:52:33.8858934 -0700 PDT m=+298.090752301 (delta=4.93698063s)
	I0624 05:52:38.816943   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:41.001196   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:41.001394   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:41.001461   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:43.566492   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:43.566492   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:43.572264   14012 main.go:141] libmachine: Using SSH client type: native
	I0624 05:52:43.572935   14012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x139a8e0] 0x139d4c0 <nil>  [] 0s} 172.31.216.161 22 <nil> <nil>}
	I0624 05:52:43.573188   14012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1719233558
	I0624 05:52:43.719003   14012 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun 24 12:52:38 UTC 2024
	
	I0624 05:52:43.719003   14012 fix.go:236] clock set: Mon Jun 24 12:52:38 UTC 2024
	 (err=<nil>)
	I0624 05:52:43.719003   14012 start.go:83] releasing machines lock for "multinode-876600-m02", held for 1m40.8488477s
	I0624 05:52:43.719003   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:45.915700   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:45.916319   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:45.916319   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:48.474008   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:48.474008   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:48.481343   14012 out.go:177] * Found network options:
	I0624 05:52:48.484144   14012 out.go:177]   - NO_PROXY=172.31.217.139
	W0624 05:52:48.486571   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:52:48.489238   14012 out.go:177]   - NO_PROXY=172.31.217.139
	W0624 05:52:48.491362   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	W0624 05:52:48.492747   14012 proxy.go:119] fail to check proxy env: Error ip not in block
	I0624 05:52:48.495118   14012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0624 05:52:48.495118   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:48.504964   14012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0624 05:52:48.504964   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:50.787108   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:50.787667   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:52:53.541317   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:53.541317   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:53.541570   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:53.571463   14012 main.go:141] libmachine: [stdout =====>] : 172.31.216.161
	
	I0624 05:52:53.571463   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:52:53.571888   14012 sshutil.go:53] new ssh client: &{IP:172.31.216.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:52:53.727256   14012 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0624 05:52:53.727405   14012 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0624 05:52:53.727405   14012 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2224213s)
	I0624 05:52:53.727405   14012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2322673s)
	W0624 05:52:53.727548   14012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0624 05:52:53.741131   14012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0624 05:52:53.772054   14012 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0624 05:52:53.772054   14012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0624 05:52:53.772167   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:52:53.772231   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:52:53.806892   14012 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0624 05:52:53.819759   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0624 05:52:53.851658   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0624 05:52:53.872776   14012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0624 05:52:53.887863   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0624 05:52:53.919896   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:52:53.955146   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0624 05:52:53.987735   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0624 05:52:54.022364   14012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0624 05:52:54.058683   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0624 05:52:54.094850   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0624 05:52:54.127556   14012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0624 05:52:54.160153   14012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0624 05:52:54.180312   14012 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0624 05:52:54.193573   14012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0624 05:52:54.228653   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:54.437761   14012 ssh_runner.go:195] Run: sudo systemctl restart containerd
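	Taken together, the sed edits above leave containerd with the cgroupfs cgroup driver, the registry.k8s.io/pause:3.9 sandbox image, and /etc/cni/net.d as its CNI config directory. A quick spot-check of the resulting config on the guest (illustrative; the expected values are taken from the sed expressions above):

grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
# expected, per the edits above:
#   sandbox_image = "registry.k8s.io/pause:3.9"
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"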
	I0624 05:52:54.471437   14012 start.go:494] detecting cgroup driver to use...
	I0624 05:52:54.485713   14012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0624 05:52:54.508214   14012 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0624 05:52:54.508214   14012 command_runner.go:130] > [Unit]
	I0624 05:52:54.508214   14012 command_runner.go:130] > Description=Docker Application Container Engine
	I0624 05:52:54.508214   14012 command_runner.go:130] > Documentation=https://docs.docker.com
	I0624 05:52:54.508214   14012 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0624 05:52:54.508214   14012 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0624 05:52:54.508214   14012 command_runner.go:130] > StartLimitBurst=3
	I0624 05:52:54.508214   14012 command_runner.go:130] > StartLimitIntervalSec=60
	I0624 05:52:54.508344   14012 command_runner.go:130] > [Service]
	I0624 05:52:54.508344   14012 command_runner.go:130] > Type=notify
	I0624 05:52:54.508344   14012 command_runner.go:130] > Restart=on-failure
	I0624 05:52:54.508466   14012 command_runner.go:130] > Environment=NO_PROXY=172.31.217.139
	I0624 05:52:54.508466   14012 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0624 05:52:54.508466   14012 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0624 05:52:54.508466   14012 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0624 05:52:54.508466   14012 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0624 05:52:54.508466   14012 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0624 05:52:54.508466   14012 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0624 05:52:54.508466   14012 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0624 05:52:54.508602   14012 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0624 05:52:54.508602   14012 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecStart=
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0624 05:52:54.508602   14012 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0624 05:52:54.508602   14012 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0624 05:52:54.508602   14012 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitNOFILE=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitNPROC=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > LimitCORE=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0624 05:52:54.508762   14012 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0624 05:52:54.508762   14012 command_runner.go:130] > TasksMax=infinity
	I0624 05:52:54.508762   14012 command_runner.go:130] > TimeoutStartSec=0
	I0624 05:52:54.508762   14012 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0624 05:52:54.508762   14012 command_runner.go:130] > Delegate=yes
	I0624 05:52:54.508762   14012 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0624 05:52:54.508909   14012 command_runner.go:130] > KillMode=process
	I0624 05:52:54.508909   14012 command_runner.go:130] > [Install]
	I0624 05:52:54.508909   14012 command_runner.go:130] > WantedBy=multi-user.target
	I0624 05:52:54.523324   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:52:54.561667   14012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0624 05:52:54.605022   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0624 05:52:54.640792   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:52:54.684770   14012 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0624 05:52:54.760274   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0624 05:52:54.786083   14012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0624 05:52:54.822073   14012 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0624 05:52:54.836364   14012 ssh_runner.go:195] Run: which cri-dockerd
	I0624 05:52:54.844010   14012 command_runner.go:130] > /usr/bin/cri-dockerd
	I0624 05:52:54.859952   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0624 05:52:54.880057   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0624 05:52:54.927694   14012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0624 05:52:55.159975   14012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0624 05:52:55.363797   14012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0624 05:52:55.363893   14012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0624 05:52:55.409772   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:55.606884   14012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0624 05:52:58.236153   14012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6291871s)
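	The 130-byte /etc/docker/daemon.json written just before this restart is what switches dockerd itself to the cgroupfs driver; its exact contents are not echoed in the log. A plausible minimal equivalent is sketched below, offered only as an assumption about its shape, not as the actual payload:

# Assumed content -- the real 130-byte file is not printed in this log
sudo tee /etc/docker/daemon.json >/dev/null <<'JSON'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "storage-driver": "overlay2"
}
JSON
sudo systemctl daemon-reload && sudo systemctl restart docker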
	I0624 05:52:58.249350   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0624 05:52:58.287725   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:52:58.322224   14012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0624 05:52:58.531177   14012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0624 05:52:58.733741   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:58.938496   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0624 05:52:58.984056   14012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0624 05:52:59.020806   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:52:59.229825   14012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0624 05:52:59.351125   14012 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0624 05:52:59.364217   14012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0624 05:52:59.373216   14012 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0624 05:52:59.373216   14012 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0624 05:52:59.373216   14012 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0624 05:52:59.373216   14012 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0624 05:52:59.373216   14012 command_runner.go:130] > Access: 2024-06-24 12:52:59.260247289 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] > Modify: 2024-06-24 12:52:59.260247289 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] > Change: 2024-06-24 12:52:59.264247281 +0000
	I0624 05:52:59.373216   14012 command_runner.go:130] >  Birth: -
	I0624 05:52:59.373216   14012 start.go:562] Will wait 60s for crictl version
	I0624 05:52:59.384201   14012 ssh_runner.go:195] Run: which crictl
	I0624 05:52:59.390214   14012 command_runner.go:130] > /usr/bin/crictl
	I0624 05:52:59.405008   14012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0624 05:52:59.472321   14012 command_runner.go:130] > Version:  0.1.0
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeName:  docker
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0624 05:52:59.472321   14012 command_runner.go:130] > RuntimeApiVersion:  v1
	I0624 05:52:59.472321   14012 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0624 05:52:59.481410   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:52:59.517651   14012 command_runner.go:130] > 26.1.4
	I0624 05:52:59.528512   14012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0624 05:52:59.564486   14012 command_runner.go:130] > 26.1.4
	I0624 05:52:59.568522   14012 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 26.1.4 ...
	I0624 05:52:59.571513   14012 out.go:177]   - env NO_PROXY=172.31.217.139
	I0624 05:52:59.574530   14012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0624 05:52:59.578474   14012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:67:04:41 Flags:up|broadcast|multicast|running}
	I0624 05:52:59.581476   14012 ip.go:210] interface addr: fe80::5869:a065:24c1:5db7/64
	I0624 05:52:59.581476   14012 ip.go:210] interface addr: 172.31.208.1/20
	I0624 05:52:59.593469   14012 ssh_runner.go:195] Run: grep 172.31.208.1	host.minikube.internal$ /etc/hosts
	I0624 05:52:59.599775   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.31.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:52:59.622147   14012 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:52:59.622996   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:52:59.623685   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:01.830731   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:01.830731   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:01.830823   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:01.831648   14012 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-876600 for IP: 172.31.216.161
	I0624 05:53:01.831709   14012 certs.go:194] generating shared ca certs ...
	I0624 05:53:01.831709   14012 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 05:53:01.832301   14012 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0624 05:53:01.832727   14012 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0624 05:53:01.832894   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0624 05:53:01.832935   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0624 05:53:01.833715   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem (1338 bytes)
	W0624 05:53:01.833715   14012 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944_empty.pem, impossibly tiny 0 bytes
	I0624 05:53:01.833715   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0624 05:53:01.834509   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0624 05:53:01.835389   14012 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem (1708 bytes)
	I0624 05:53:01.835389   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem -> /usr/share/ca-certificates/944.pem
	I0624 05:53:01.835389   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem -> /usr/share/ca-certificates/9442.pem
	I0624 05:53:01.835969   14012 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:01.836185   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0624 05:53:01.889410   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0624 05:53:01.946338   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0624 05:53:01.995283   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0624 05:53:02.046383   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\944.pem --> /usr/share/ca-certificates/944.pem (1338 bytes)
	I0624 05:53:02.094942   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9442.pem --> /usr/share/ca-certificates/9442.pem (1708 bytes)
	I0624 05:53:02.141874   14012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0624 05:53:02.203435   14012 ssh_runner.go:195] Run: openssl version
	I0624 05:53:02.212760   14012 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0624 05:53:02.225838   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/944.pem && ln -fs /usr/share/ca-certificates/944.pem /etc/ssl/certs/944.pem"
	I0624 05:53:02.262099   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/944.pem
	I0624 05:53:02.269865   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:53:02.269865   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 24 10:39 /usr/share/ca-certificates/944.pem
	I0624 05:53:02.284609   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/944.pem
	I0624 05:53:02.294976   14012 command_runner.go:130] > 51391683
	I0624 05:53:02.308604   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/944.pem /etc/ssl/certs/51391683.0"
	I0624 05:53:02.346804   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9442.pem && ln -fs /usr/share/ca-certificates/9442.pem /etc/ssl/certs/9442.pem"
	I0624 05:53:02.381782   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.389490   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.390324   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 24 10:39 /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.406339   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9442.pem
	I0624 05:53:02.415412   14012 command_runner.go:130] > 3ec20f2e
	I0624 05:53:02.430876   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9442.pem /etc/ssl/certs/3ec20f2e.0"
	I0624 05:53:02.470755   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0624 05:53:02.509023   14012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.517104   14012 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.517507   14012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 24 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.532647   14012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0624 05:53:02.541811   14012 command_runner.go:130] > b5213941
	I0624 05:53:02.554737   14012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
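	The openssl/ln steps above install each CA under /etc/ssl/certs keyed by its OpenSSL subject hash (51391683, 3ec20f2e, b5213941); the <hash>.0 symlink is what lets OpenSSL-based clients find a trusted CA by subject. A rough Go sketch of that hash-and-symlink step, assuming openssl is on PATH and the process has write access to the certs directory; it is illustrative only, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA asks openssl for the certificate's subject hash and creates the
// "<hash>.0" symlink in certsDir, like `openssl x509 -hash` followed by `ln -fs`.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // overwrite an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}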
	I0624 05:53:02.589160   14012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0624 05:53:02.595002   14012 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:53:02.596033   14012 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0624 05:53:02.596033   14012 kubeadm.go:928] updating node {m02 172.31.216.161 8443 v1.30.2 docker false true} ...
	I0624 05:53:02.596033   14012 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0624 05:53:02.610938   14012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0624 05:53:02.631186   14012 command_runner.go:130] > kubeadm
	I0624 05:53:02.631257   14012 command_runner.go:130] > kubectl
	I0624 05:53:02.631257   14012 command_runner.go:130] > kubelet
	I0624 05:53:02.631300   14012 binaries.go:44] Found k8s binaries, skipping transfer
	I0624 05:53:02.643970   14012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0624 05:53:02.664014   14012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0624 05:53:02.698068   14012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0624 05:53:02.743429   14012 ssh_runner.go:195] Run: grep 172.31.217.139	control-plane.minikube.internal$ /etc/hosts
	I0624 05:53:02.750413   14012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.31.217.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0624 05:53:02.790956   14012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0624 05:53:03.012241   14012 ssh_runner.go:195] Run: sudo systemctl start kubelet
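	The kubelet setup above amounts to writing a systemd drop-in (10-kubeadm.conf) and a kubelet.service unit, then running daemon-reload and start. A minimal sketch of that sequence for the worker node, with the ExecStart flags copied from the log; the paths and error handling are simplified and this is not the code minikube actually runs.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Drop-in content as shown in the log for multinode-876600-m02.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-876600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.31.216.161

[Install]
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Reload unit files, then start kubelet, as the log does.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "systemctl %v: %v: %s\n", args, err, out)
			return
		}
	}
}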
	I0624 05:53:03.040551   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:03.040624   14012 start.go:316] joinCluster: &{Name:multinode-876600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-876600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.31.217.139 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.31.216.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.31.210.168 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 05:53:03.040624   14012 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.31.216.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0624 05:53:03.040624   14012 host.go:66] Checking if "multinode-876600-m02" exists ...
	I0624 05:53:03.042127   14012 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:53:03.042821   14012 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:53:03.043517   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:05.260684   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:05.260743   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:05.260743   14012 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:53:05.261138   14012 api_server.go:166] Checking apiserver status ...
	I0624 05:53:05.274606   14012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:53:05.274606   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:53:07.470842   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:07.470842   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:07.471036   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:53:10.133303   14012 main.go:141] libmachine: [stdout =====>] : 172.31.217.139
	
	I0624 05:53:10.133303   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:10.133897   14012 sshutil.go:53] new ssh client: &{IP:172.31.217.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:53:10.254454   14012 command_runner.go:130] > 1846
	I0624 05:53:10.254544   14012 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9799197s)
	I0624 05:53:10.268784   14012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup
	W0624 05:53:10.286805   14012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0624 05:53:10.301053   14012 ssh_runner.go:195] Run: ls
	I0624 05:53:10.308709   14012 api_server.go:253] Checking apiserver healthz at https://172.31.217.139:8443/healthz ...
	I0624 05:53:10.316337   14012 api_server.go:279] https://172.31.217.139:8443/healthz returned 200:
	ok
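	The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A small Go sketch of such a probe follows; TLS verification is skipped here for brevity, whereas a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://172.31.217.139:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}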
	I0624 05:53:10.329149   14012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-876600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0624 05:53:10.492619   14012 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t9wzm, kube-system/kube-proxy-hjjs8
	I0624 05:53:13.514425   14012 command_runner.go:130] > node/multinode-876600-m02 cordoned
	I0624 05:53:13.514564   14012 command_runner.go:130] > pod "busybox-fc5497c4f-vqhsz" has DeletionTimestamp older than 1 seconds, skipping
	I0624 05:53:13.514564   14012 command_runner.go:130] > node/multinode-876600-m02 drained
	I0624 05:53:13.514564   14012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl drain multinode-876600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.185403s)
	I0624 05:53:13.514564   14012 node.go:128] successfully drained node "multinode-876600-m02"
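	Before the worker is rejoined, it is drained (as above) and then its kubeadm state is wiped with kubeadm reset, which runs next in the log over SSH on the worker itself. A sketch of the same two commands invoked via os/exec, with the flags taken from the log; the binary locations and running both commands from one machine are assumptions for illustration only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Cordon and evict workloads from the worker before wiping its kubeadm state.
	if err := run("kubectl", "drain", "multinode-876600-m02",
		"--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
		"--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data"); err != nil {
		fmt.Fprintln(os.Stderr, "drain failed:", err)
		return
	}
	if err := run("kubeadm", "reset", "--force", "--ignore-preflight-errors=all",
		"--cri-socket=unix:///var/run/cri-dockerd.sock"); err != nil {
		fmt.Fprintln(os.Stderr, "reset failed:", err)
	}
}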
	I0624 05:53:13.514812   14012 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0624 05:53:13.514950   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:53:15.702742   14012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:53:15.702844   14012 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:53:15.702844   14012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732338344Z" level=info msg="shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732817144Z" level=warning msg="cleaning up after shim disconnected" id=30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4 namespace=moby
	Jun 24 12:50:15 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:15.732926744Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.090792389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091479780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091556679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:31 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:31.091946174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150258850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150607644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.150763741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.151070436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159607879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159735976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159753776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.159954072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da/resolv.conf as [nameserver 172.31.208.1]"
	Jun 24 12:50:48 multinode-876600 cri-dockerd[1270]: time="2024-06-24T12:50:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.797923160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798133566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798150366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.798350172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831134223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831193325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831204625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 24 12:50:48 multinode-876600 dockerd[1050]: time="2024-06-24T12:50:48.831280228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30f4b1b02a0ba       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   d504c60c2a8ea       busybox-fc5497c4f-ddhfw
	b74d3be4b134f       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   8f638dcae3b23       coredns-7db6d8ff4d-sq7g6
	804c0aa053890       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   90d48427c423b       storage-provisioner
	404cdbe8e049d       ac1c61439df46                                                                                         4 minutes ago       Running             kindnet-cni               1                   e44a8a9ab355d       kindnet-x7zb4
	30fc6635cecf9       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   90d48427c423b       storage-provisioner
	d7311e3316b77       53c535741fb44                                                                                         4 minutes ago       Running             kube-proxy                1                   422468c35b209       kube-proxy-lcc9v
	7154c31f4e659       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   dd15388f44a90       etcd-multinode-876600
	d02d42ecc648a       56ce0fd9fb532                                                                                         4 minutes ago       Running             kube-apiserver            0                   5ee4c386584dd       kube-apiserver-multinode-876600
	92813c7375dd7       7820c83aa1394                                                                                         4 minutes ago       Running             kube-scheduler            1                   9dc882a855c97       kube-scheduler-multinode-876600
	39d593f24d2b3       e874818b3caac                                                                                         4 minutes ago       Running             kube-controller-manager   1                   5a9d5aa43e22a       kube-controller-manager-multinode-876600
	a30239c04d7d0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   ccbe4517423ff       busybox-fc5497c4f-ddhfw
	f46bdc12472e4       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   b42fe71aa0d74       coredns-7db6d8ff4d-sq7g6
	f74eb1beb274a       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              27 minutes ago      Exited              kindnet-cni               0                   2f2af473df8ad       kindnet-x7zb4
	b0dd966ee710f       53c535741fb44                                                                                         27 minutes ago      Exited              kube-proxy                0                   d072caca08610       kube-proxy-lcc9v
	7174bdea66e24       e874818b3caac                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   0449d7721b5b2       kube-controller-manager-multinode-876600
	d7d8d18e1b115       7820c83aa1394                                                                                         27 minutes ago      Exited              kube-scheduler            0                   6184b2eb79fd8       kube-scheduler-multinode-876600
	
	
	==> coredns [b74d3be4b134] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3bbd098fc214dc6dfa00c568b7eace025b603ea701d85ff6422fce82c71ce8b3031aaaf62adfe342d1a3f5f0bf1be6f08c4386d35c48cea8ace4e1727588bef9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53773 - 21703 "HINFO IN 7109432315850448437.2649371426144551600. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.129915433s
	
	
	==> coredns [f46bdc12472e] <==
	[INFO] 10.244.1.2:42567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	[INFO] 10.244.1.2:33282 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071801s
	[INFO] 10.244.1.2:46897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058201s
	[INFO] 10.244.1.2:39580 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055401s
	[INFO] 10.244.1.2:50077 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076501s
	[INFO] 10.244.1.2:54026 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088201s
	[INFO] 10.244.1.2:33254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153902s
	[INFO] 10.244.0.3:55623 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076201s
	[INFO] 10.244.0.3:49262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181203s
	[INFO] 10.244.0.3:40361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082602s
	[INFO] 10.244.0.3:33420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153903s
	[INFO] 10.244.1.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192403s
	[INFO] 10.244.1.2:51621 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097001s
	[INFO] 10.244.1.2:49305 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149603s
	[INFO] 10.244.1.2:53850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062401s
	[INFO] 10.244.0.3:42757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107102s
	[INFO] 10.244.0.3:52658 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203903s
	[INFO] 10.244.0.3:36517 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000168302s
	[INFO] 10.244.0.3:48282 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168203s
	[INFO] 10.244.1.2:55667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114902s
	[INFO] 10.244.1.2:54799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112502s
	[INFO] 10.244.1.2:52760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051601s
	[INFO] 10.244.1.2:40971 - 5 "PTR IN 1.208.31.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056601s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-876600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-876600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=multinode-876600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_24T05_26_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 12:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-876600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 12:53:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Jun 2024 12:50:24 +0000   Mon, 24 Jun 2024 12:50:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.31.217.139
	  Hostname:    multinode-876600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fe05d772b7042bfbd8e2f0ec2c5948b
	  System UUID:                ea9911c3-b7a0-5f4f-876f-a36df94d6384
	  Boot ID:                    bd9891c7-7702-4926-9d82-4f4ac854b116
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ddhfw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-sq7g6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-876600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-x7zb4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-876600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-multinode-876600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-lcc9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-876600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-876600 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m12s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m12s)  kubelet          Node multinode-876600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m12s)  kubelet          Node multinode-876600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m54s                  node-controller  Node multinode-876600 event: Registered Node multinode-876600 in Controller
	
	
	Name:               multinode-876600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-876600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=multinode-876600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T05_29_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 12:29:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	                    node.kubernetes.io/unschedulable:NoSchedule
	Unschedulable:      true
	Lease:
	  HolderIdentity:  multinode-876600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 12:46:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 24 Jun 2024 12:46:01 +0000   Mon, 24 Jun 2024 12:50:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.31.221.199
	  Hostname:    multinode-876600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ab9750fd59c4f058bacdb88dd2b8b45
	  System UUID:                2eaa2289-552f-c543-95fb-d97a58bb1b1e
	  Boot ID:                    e896ee7f-12fa-4e27-930d-54fb1506fe8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vqhsz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-t9wzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-hjjs8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-876600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-876600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-876600-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m54s              node-controller  Node multinode-876600-m02 event: Registered Node multinode-876600-m02 in Controller
	  Normal  NodeNotReady             3m14s              node-controller  Node multinode-876600-m02 status is now: NodeNotReady
	
	
	Name:               multinode-876600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-876600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=311081119e98d6eb0a16473abab8b278d38b85ec
	                    minikube.k8s.io/name=multinode-876600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_24T05_45_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Jun 2024 12:45:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-876600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Jun 2024 12:46:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 24 Jun 2024 12:45:20 +0000   Mon, 24 Jun 2024 12:46:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.31.210.168
	  Hostname:    multinode-876600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d425f339581a49cc8a785fa3c60d8e79
	  System UUID:                c16ec6c9-54f1-534b-a4e9-1b7a54fa897e
	  Boot ID:                    8889e825-9484-4dd9-81b2-35c5ce1781aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9cfcv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-wf7jm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m33s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-876600-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m37s (x2 over 8m37s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x2 over 8m37s)  kubelet          Node multinode-876600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x2 over 8m37s)  kubelet          Node multinode-876600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m34s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	  Normal  NodeReady                8m29s                  kubelet          Node multinode-876600-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m53s                  node-controller  Node multinode-876600-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m54s                  node-controller  Node multinode-876600-m03 event: Registered Node multinode-876600-m03 in Controller
	
	
	==> dmesg <==
	[  +1.309498] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.056200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.937876] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun24 12:49] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.092490] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.067429] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[ +26.139271] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[  +0.105333] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.522794] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +0.190312] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.205662] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	[  +2.903832] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +0.180001] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.178728] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.258644] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	[  +0.878961] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.094554] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.131902] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +1.287130] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.849895] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.827840] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	[  +7.262226] kauditd_printk_skb: 70 callbacks suppressed
	[Jun24 12:52] hrtimer: interrupt took 2989134 ns
	
	
	==> etcd [7154c31f4e65] <==
	{"level":"info","ts":"2024-06-24T12:49:39.969551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"950c92330396b402","local-member-id":"e5aae37eb5b537b7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:49:39.969959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-24T12:49:39.973554Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e5aae37eb5b537b7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-24T12:49:39.97415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-24T12:49:39.974575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-24T12:49:39.974693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-24T12:49:39.978119Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-24T12:49:39.979454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e5aae37eb5b537b7","initial-advertise-peer-urls":["https://172.31.217.139:2380"],"listen-peer-urls":["https://172.31.217.139:2380"],"advertise-client-urls":["https://172.31.217.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.31.217.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-24T12:49:39.979711Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-24T12:49:39.978158Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.31.217.139:2380"}
	{"level":"info","ts":"2024-06-24T12:49:39.98007Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.31.217.139:2380"}
	{"level":"info","ts":"2024-06-24T12:49:40.911048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-24T12:49:40.911362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-24T12:49:40.911517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgPreVoteResp from e5aae37eb5b537b7 at term 2"}
	{"level":"info","ts":"2024-06-24T12:49:40.912472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became candidate at term 3"}
	{"level":"info","ts":"2024-06-24T12:49:40.91266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 received MsgVoteResp from e5aae37eb5b537b7 at term 3"}
	{"level":"info","ts":"2024-06-24T12:49:40.912893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5aae37eb5b537b7 became leader at term 3"}
	{"level":"info","ts":"2024-06-24T12:49:40.913147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5aae37eb5b537b7 elected leader e5aae37eb5b537b7 at term 3"}
	{"level":"info","ts":"2024-06-24T12:49:40.923242Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e5aae37eb5b537b7","local-member-attributes":"{Name:multinode-876600 ClientURLs:[https://172.31.217.139:2379]}","request-path":"/0/members/e5aae37eb5b537b7/attributes","cluster-id":"950c92330396b402","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-24T12:49:40.92354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T12:49:40.926068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-24T12:49:40.937121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-24T12:49:40.937287Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-24T12:49:40.946303Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-24T12:49:40.946944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.31.217.139:2379"}
	
	
	==> kernel <==
	 12:53:49 up 5 min,  0 users,  load average: 0.23, 0.34, 0.18
	Linux multinode-876600 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [404cdbe8e049] <==
	I0624 12:53:06.720644       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:53:16.736819       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 12:53:16.736926       1 main.go:227] handling current node
	I0624 12:53:16.736942       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:53:16.736950       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:53:16.737884       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:53:16.738005       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:53:26.758524       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 12:53:26.758554       1 main.go:227] handling current node
	I0624 12:53:26.758568       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:53:26.758574       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:53:26.759495       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:53:26.759521       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:53:36.774920       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 12:53:36.775376       1 main.go:227] handling current node
	I0624 12:53:36.775638       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:53:36.775822       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:53:36.776291       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:53:36.776382       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:53:46.783193       1 main.go:223] Handling node with IPs: map[172.31.217.139:{}]
	I0624 12:53:46.783242       1 main.go:227] handling current node
	I0624 12:53:46.783256       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:53:46.783263       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:53:46.783389       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:53:46.783421       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f74eb1beb274] <==
	I0624 12:46:31.361781       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:46:41.375212       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:46:41.375306       1 main.go:227] handling current node
	I0624 12:46:41.375321       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:46:41.375345       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:46:41.375701       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:46:41.375941       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:46:51.386531       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:46:51.386634       1 main.go:227] handling current node
	I0624 12:46:51.386648       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:46:51.386656       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:46:51.386896       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:46:51.386965       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:47:01.413229       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:47:01.413382       1 main.go:227] handling current node
	I0624 12:47:01.413399       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:47:01.413411       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:47:01.413774       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:47:01.413842       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	I0624 12:47:11.423158       1 main.go:223] Handling node with IPs: map[172.31.211.219:{}]
	I0624 12:47:11.423298       1 main.go:227] handling current node
	I0624 12:47:11.423313       1 main.go:223] Handling node with IPs: map[172.31.221.199:{}]
	I0624 12:47:11.423321       1 main.go:250] Node multinode-876600-m02 has CIDR [10.244.1.0/24] 
	I0624 12:47:11.423484       1 main.go:223] Handling node with IPs: map[172.31.210.168:{}]
	I0624 12:47:11.423516       1 main.go:250] Node multinode-876600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d02d42ecc648] <==
	I0624 12:49:43.138040       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0624 12:49:43.138517       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0624 12:49:43.139618       1 shared_informer.go:320] Caches are synced for configmaps
	I0624 12:49:43.139844       1 aggregator.go:165] initial CRD sync complete...
	I0624 12:49:43.140029       1 autoregister_controller.go:141] Starting autoregister controller
	I0624 12:49:43.140166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0624 12:49:43.140302       1 cache.go:39] Caches are synced for autoregister controller
	I0624 12:49:43.182488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0624 12:49:43.199846       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0624 12:49:43.199918       1 policy_source.go:224] refreshing policies
	I0624 12:49:43.214899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0624 12:49:43.231803       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0624 12:49:43.231837       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0624 12:49:43.233458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0624 12:49:43.234661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0624 12:49:44.039058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0624 12:49:44.670612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.211.219 172.31.217.139]
	I0624 12:49:44.674243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0624 12:49:44.685319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0624 12:49:46.213764       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0624 12:49:46.521931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0624 12:49:46.548565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0624 12:49:46.691774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0624 12:49:46.705947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0624 12:50:04.661419       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.31.217.139]
	
	
	==> kube-controller-manager [39d593f24d2b] <==
	I0624 12:49:55.749061       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0624 12:49:55.749072       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0624 12:49:55.786539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0624 12:49:55.786791       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0624 12:49:55.787210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0624 12:49:55.787196       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0624 12:49:55.870299       1 shared_informer.go:320] Caches are synced for attach detach
	I0624 12:49:55.913033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0624 12:49:55.913175       1 shared_informer.go:320] Caches are synced for PV protection
	I0624 12:49:56.316948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 12:49:56.317045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0624 12:49:56.319592       1 shared_informer.go:320] Caches are synced for garbage collector
	I0624 12:50:24.215245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:50:35.774425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.630194ms"
	I0624 12:50:35.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.199µs"
	I0624 12:50:49.650744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.600477ms"
	I0624 12:50:49.651070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.305µs"
	I0624 12:50:49.680730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.005µs"
	I0624 12:50:49.750213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.078435ms"
	I0624 12:50:49.750756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.906µs"
	I0624 12:53:10.544227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.329437ms"
	I0624 12:53:10.544813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.102µs"
	I0624 12:53:10.564198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.057699ms"
	I0624 12:53:10.564739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.902µs"
	I0624 12:53:10.565154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.901µs"
	
	
	==> kube-controller-manager [7174bdea66e2] <==
	I0624 12:29:41.843061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m02\" does not exist"
	I0624 12:29:41.900661       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m02" podCIDRs=["10.244.1.0/24"]
	I0624 12:29:45.647065       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m02"
	I0624 12:30:00.589471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:30:26.654954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.879067ms"
	I0624 12:30:26.679609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.540052ms"
	I0624 12:30:26.680674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.501µs"
	I0624 12:30:26.681695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0624 12:30:26.694871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.7µs"
	I0624 12:30:30.076629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.403765ms"
	I0624 12:30:30.088382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="368.006µs"
	I0624 12:30:30.182296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.931593ms"
	I0624 12:30:30.183602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.8µs"
	I0624 12:34:19.437825       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 12:34:19.440752       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:34:19.481713       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.2.0/24"]
	I0624 12:34:20.727860       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-876600-m03"
	I0624 12:34:38.337938       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 12:42:20.859731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:45:06.793956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:45:12.557170       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-876600-m03\" does not exist"
	I0624 12:45:12.566511       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	I0624 12:45:12.575650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-876600-m03" podCIDRs=["10.244.3.0/24"]
	I0624 12:45:20.406996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m03"
	I0624 12:46:56.006931       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-876600-m02"
	
	
	==> kube-proxy [b0dd966ee710] <==
	I0624 12:26:42.526977       1 server_linux.go:69] "Using iptables proxy"
	I0624 12:26:42.552892       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.211.219"]
	I0624 12:26:42.633780       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 12:26:42.633879       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 12:26:42.633906       1 server_linux.go:165] "Using iptables Proxier"
	I0624 12:26:42.638370       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 12:26:42.639025       1 server.go:872] "Version info" version="v1.30.2"
	I0624 12:26:42.639261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 12:26:42.641342       1 config.go:192] "Starting service config controller"
	I0624 12:26:42.641520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 12:26:42.641724       1 config.go:101] "Starting endpoint slice config controller"
	I0624 12:26:42.642242       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 12:26:42.643204       1 config.go:319] "Starting node config controller"
	I0624 12:26:42.644759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 12:26:42.742276       1 shared_informer.go:320] Caches are synced for service config
	I0624 12:26:42.742435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 12:26:42.745166       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d7311e3316b7] <==
	I0624 12:49:46.142798       1 server_linux.go:69] "Using iptables proxy"
	I0624 12:49:46.182092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.31.217.139"]
	I0624 12:49:46.325927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0624 12:49:46.326925       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0624 12:49:46.327056       1 server_linux.go:165] "Using iptables Proxier"
	I0624 12:49:46.334590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0624 12:49:46.334905       1 server.go:872] "Version info" version="v1.30.2"
	I0624 12:49:46.334923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 12:49:46.338563       1 config.go:192] "Starting service config controller"
	I0624 12:49:46.339709       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0624 12:49:46.339782       1 config.go:101] "Starting endpoint slice config controller"
	I0624 12:49:46.339791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0624 12:49:46.341487       1 config.go:319] "Starting node config controller"
	I0624 12:49:46.341700       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0624 12:49:46.441401       1 shared_informer.go:320] Caches are synced for service config
	I0624 12:49:46.440901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0624 12:49:46.442285       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [92813c7375dd] <==
	I0624 12:49:40.551463       1 serving.go:380] Generated self-signed cert in-memory
	W0624 12:49:43.095363       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0624 12:49:43.095694       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0624 12:49:43.095892       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0624 12:49:43.096084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0624 12:49:43.145037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0624 12:49:43.145132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0624 12:49:43.149883       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0624 12:49:43.150061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0624 12:49:43.150287       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0624 12:49:43.150080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0624 12:49:43.250844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d7d8d18e1b11] <==
	W0624 12:26:24.806457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0624 12:26:24.806503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0624 12:26:24.827344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0624 12:26:24.827580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0624 12:26:25.007400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.007726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.011246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.011668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.110081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0624 12:26:25.110457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0624 12:26:25.125513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0624 12:26:25.125981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0624 12:26:25.216762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0624 12:26:25.217161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0624 12:26:25.241335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0624 12:26:25.241697       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0624 12:26:25.287039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0624 12:26:25.287255       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0624 12:26:25.308454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0624 12:26:25.308497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0624 12:26:27.431118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0624 12:47:14.284750       1 run.go:74] "command failed" err="finished without leader elect"
	I0624 12:47:14.287472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0624 12:47:14.287550       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0624 12:47:14.287869       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 24 12:50:30 multinode-876600 kubelet[1517]: I0624 12:50:30.885629    1517 scope.go:117] "RemoveContainer" containerID="30fc6635cecf9d50193e72291ce2f55a7783b1e8ce04f4dc8ff5ff9d97d5b8c4"
	Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.870863    1517 scope.go:117] "RemoveContainer" containerID="d781e9872808b4e1e97e4787b189799fc9139f312f16c13df21f0f04958beef4"
	Jun 24 12:50:37 multinode-876600 kubelet[1517]: I0624 12:50:37.919489    1517 scope.go:117] "RemoveContainer" containerID="eefbf63a6c05b1cea86534b5d6bbe83646f25afd662c669130893b02f6b116ad"
	Jun 24 12:50:37 multinode-876600 kubelet[1517]: E0624 12:50:37.922720    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:50:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:50:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:50:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:50:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.525846    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d504c60c2a8ea25bc94bf6336861a54bdc2e090f03f1bd7587695e391a21f80c"
	Jun 24 12:50:48 multinode-876600 kubelet[1517]: I0624 12:50:48.576897    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f638dcae3b23dc25c269b171b23b7a480b330841512d4d79ddd7304e8a551da"
	Jun 24 12:51:37 multinode-876600 kubelet[1517]: E0624 12:51:37.921513    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:51:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:51:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:51:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:51:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:52:37 multinode-876600 kubelet[1517]: E0624 12:52:37.920701    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:52:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:52:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:52:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:52:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 24 12:53:37 multinode-876600 kubelet[1517]: E0624 12:53:37.923773    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 24 12:53:37 multinode-876600 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 24 12:53:37 multinode-876600 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 24 12:53:37 multinode-876600 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 24 12:53:37 multinode-876600 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:53:38.642982    5888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-876600 -n multinode-876600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-876600 -n multinode-876600: (12.342259s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-876600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-6zzdc
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-876600 describe pod busybox-fc5497c4f-6zzdc
helpers_test.go:282: (dbg) kubectl --context multinode-876600 describe pod busybox-fc5497c4f-6zzdc:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-6zzdc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpj79 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rpj79:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  60s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (492.62s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (10800.298s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2013884555.exe start -p running-upgrade-231000 --memory=2200 --vm-driver=hyperv
E0624 06:13:21.875762     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2013884555.exe start -p running-upgrade-231000 --memory=2200 --vm-driver=hyperv: (8m18.6790812s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdFlag (1m28s)
	TestKubernetesUpgrade (8m20s)
	TestRunningBinaryUpgrade (8m20s)
	TestStoppedBinaryUpgrade (3m7s)
	TestStoppedBinaryUpgrade/Upgrade (3m6s)

                                                
                                                
goroutine 1811 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000898340, 0xc0007b1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0001803c0, {0x4b11060, 0x2a, 0x2a}, {0x2745171?, 0x5880af?, 0x4b34340?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0001eb360)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0001eb360)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000597c00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 882 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x1c9fad87668, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x4dfdd6?, 0x4bc17a0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0015cd420, 0xc00093bbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0015cd408, 0x2f4, {0xc00087a000?, 0x0?, 0x0?}, 0xc00092c008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0015cd408, 0xc00093bd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0015cd408)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00069e500)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00069e500)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008ae0f0, {0x3763d80, 0xc00069e500})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008ae0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00142e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 815
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 42 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.0/klog.go:1175 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 41
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.0/klog.go:1171 +0x171

                                                
                                                
goroutine 168 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0006fac50, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x21de640?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b22420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006fac80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008d2000, {0x374d200, 0xc00077e630}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008d2000, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1799 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc001489b20?, 0x23afd28?, 0xc001489b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc001489bf8?, 0x4d281b?, 0x1c9f5920eb8?, 0x4d?, 0x4c8ba6?, 0x548b6a?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b8, {0xc0006a0dd8?, 0x228, 0x5841bf?}, 0xc000940008?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a2c788?, {0xc0006a0dd8?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a2c788, {0xc0006a0dd8, 0x228, 0x228})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006f61c8, {0xc0006a0dd8?, 0xc001489d98?, 0x72?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0005b4e70, {0x374bdc0, 0xc000932020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc0005b4e70}, {0x374bdc0, 0xc000932020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x374bf00, 0xc0005b4e70})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d0c36?, {0x374bf00?, 0xc0005b4e70?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc0005b4e70}, {0x374be80, 0xc0006f61c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x5f70616320656d69?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1728
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 156 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b22540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 157 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006fac80, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 703 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001524820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001524820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc001524820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc001524820, 0x31f6b08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 169 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3770cb0, 0xc000054420}, 0xc0008f5f50, 0xc0008f5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3770cb0, 0xc000054420}, 0x80?, 0xc0008f5f50, 0xc0008f5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3770cb0?, 0xc000054420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x65e425?, 0xc000ac1ce0?, 0xc000055380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 170 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1810 [select]:
os/exec.(*Cmd).watchCtx(0xc0005ee000, 0xc000594960)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 802
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 1728 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdc5224de0?, {0xc00140d960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x668, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000736750)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00088c580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00088c580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00142e9c0, 0xc00088c580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00142e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc00142e9c0, 0x31f6c10)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1710 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0008f7b20?, 0xc00075b800?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0008f7b98?, 0x4dac05?, 0xc0008f7bf8?, 0x4d281b?, 0xc0008f7c58?, 0x7ee93e?, 0xc000940388?, 0x3770917?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x678, {0xc00096f000?, 0x200, 0x0?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0015cc008?, {0xc00096f000?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0015cc008, {0xc00096f000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006823b8, {0xc00096f000?, 0x4d281b?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001652300, {0x374bdc0, 0xc000932e80})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc001652300}, {0x374bdc0, 0xc000932e80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000003e50?, {0x374bf00, 0xc001652300})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0019d0a80?, {0x374bf00?, 0xc001652300?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc001652300}, {0x374be80, 0xc0006823b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000003e00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1794
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1629 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001524d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001524d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001524d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc001524d00, 0x31f6be8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1746 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffdc5224de0?, {0xc001411798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x620, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0015bed50)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ac1600)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000ac1600)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00142ed00, 0xc000ac1600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00142ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaff
testing.tRunner(0xc00142ed00, 0x31f6bb0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1801 [select]:
os/exec.(*Cmd).watchCtx(0xc00088c580, 0xc0007862a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1728
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 1729 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00142eb60, {0x26ed1de?, 0x3005753e800?}, 0xc0006fadc0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00142eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc00142eb60, 0x31f6c38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1706 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc00150bb20?, 0x26e6170?, 0xc00150bb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc00150bbf8?, 0x4d2985?, 0x1c9f5920eb8?, 0x8000?, 0x4c8b01?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x624, {0xc001451366?, 0x2c9a, 0x5841bf?}, 0xc001704f08?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001704f08?, {0xc001451366?, 0x3baa?, 0x3baa?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001704f08, {0xc001451366, 0x2c9a, 0x2c9a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00059a0b0, {0xc001451366?, 0xc00150bd98?, 0x3e2e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017a6180, {0x374bdc0, 0xc000932e38})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc0017a6180}, {0x374bdc0, 0xc000932e38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x374bf00, 0xc0017a6180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d0c36?, {0x374bf00?, 0xc0017a6180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc0017a6180}, {0x374be80, 0xc00059a0b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0007862a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1746
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 802 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdc5224de0?, {0xc001607a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x690, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007ff4a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0005ee000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0005ee000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc001524ea0, 0xc0005ee000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc001524ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc001524ea0, 0x31f6b48)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 803 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001525040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001525040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc001525040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc001525040, 0x31f6b40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 705 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001524b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001524b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc001524b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc001524b60, 0x31f6b18)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 704 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015249c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015249c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0015249c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0015249c0, 0x31f6b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1631 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015256c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015256c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0015256c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0015256c0, 0x31f6c00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1711 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ac1ce0, 0xc0005949c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1794
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1713 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc00150db20?, 0x26e6170?, 0xc00150db58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc00150dbf8?, 0x4d2985?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5a0, {0xc0015bbd9d?, 0x263, 0x5841bf?}, 0x10?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007aaa08?, {0xc0015bbd9d?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007aaa08, {0xc0015bbd9d, 0x263, 0x263})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a64e8, {0xc0015bbd9d?, 0xc00150dd98?, 0xedc?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001652090, {0x374bdc0, 0xc000932010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc001652090}, {0x374bdc0, 0xc000932010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x374bf00, 0xc001652090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4d0c36?, {0x374bf00?, 0xc001652090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc001652090}, {0x374be80, 0xc0000a64e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0019d02a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 802
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1726 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0006a4460)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001525d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001525d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc001525d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc001525d40, 0x31f6c30)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1712 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc00146fb20?, 0x26e6170?, 0xc00146fb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc00146fbf8?, 0x4d2985?, 0x1c9f5920a28?, 0x4d?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x59c, {0xc0007bda10?, 0x5f0, 0x0?}, 0xc00146fc50?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007aa508?, {0xc0007bda10?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007aa508, {0xc0007bda10, 0x5f0, 0x5f0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6110, {0xc0007bda10?, 0x1c9faed1408?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001652060, {0x374bdc0, 0xc0006f6000})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc001652060}, {0x374bdc0, 0xc0006f6000}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00146fe78?, {0x374bf00, 0xc001652060})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00146ff38?, {0x374bf00?, 0xc001652060?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc001652060}, {0x374be80, 0xc0000a6110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000594d80?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 802
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1794 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffdc5224de0?, {0xc0014196a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x608, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0015bf140)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ac1ce0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000ac1ce0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000898d00, 0xc000ac1ce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001419c20?, {0x3759678, 0xc0014406a0}, 0x31f7de8, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3759678?, 0xc0014406a0?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0007a5e28, 0x3b9aca00, 0x1a3185c5000, {0xc0007a5d08?, 0x21de640?, 0x51f288?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000898d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc000898d00, 0xc0006fadc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1729
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1795 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0015f3b20?, 0x3779b78?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000003f30?, 0xc0017d7000?, 0xc0015f3bf8?, 0x4d281b?, 0xc0015f3bc8?, 0x620975?, 0x0?, 0xc0017d7000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x670, {0xc00077ce00?, 0x200, 0x5841bf?}, 0x80?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001705908?, {0xc00077ce00?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001705908, {0xc00077ce00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000682350, {0xc00077ce00?, 0x1c9fae826c8?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0016522d0, {0x374bdc0, 0xc000682418})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc0016522d0}, {0x374bdc0, 0xc000682418}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x882bc5?, {0x374bf00, 0xc0016522d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0015f3eb8?, {0x374bf00?, 0xc0016522d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc0016522d0}, {0x374be80, 0xc000682350}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000a20000?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1794
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1800 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc0015f7b20?, 0x26e6170?, 0xc0015f7b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc0015f7bf8?, 0x4d2985?, 0x1c9f5920a28?, 0xc000929f59?, 0x10?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2a8, {0xc00018793b?, 0x6c5, 0x5841bf?}, 0xc000928260?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a2cc88?, {0xc00018793b?, 0x1000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a2cc88, {0xc00018793b, 0x6c5, 0x6c5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006f6280, {0xc00018793b?, 0xc001bfcc40?, 0x653?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0005b5050, {0x374bdc0, 0xc0000a67f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc0005b5050}, {0x374bdc0, 0xc0000a67f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015f7e78?, {0x374bf00, 0xc0005b5050})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0015f7f38?, {0x374bf00?, 0xc0005b5050?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc0005b5050}, {0x374be80, 0xc0006f6280}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000787380?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1728
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1707 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ac1600, 0xc000786360)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1746
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1705 [syscall, locked to thread]:
syscall.SyscallN(0x4e7ea5?, {0xc0007a1b20?, 0x24179e8?, 0xc0007a1b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4dfdd6?, 0x4bc17a0?, 0xc0007a1bf8?, 0x4d2985?, 0x1c9f5920598?, 0xc0001a374d?, 0x0?, 0xc00079c480?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b0, {0xc00075aa52?, 0x5ae, 0x5841bf?}, 0xc000680908?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001704508?, {0xc00075aa52?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001704508, {0xc00075aa52, 0x5ae, 0x5ae})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00059a070, {0xc00075aa52?, 0xc001bfd180?, 0x20e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017a6150, {0x374bdc0, 0xc0000a67a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x374bf00, 0xc0017a6150}, {0x374bdc0, 0xc0000a67a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007a1e78?, {0x374bf00, 0xc0017a6150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007a1f38?, {0x374bf00?, 0xc0017a6150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x374bf00, 0xc0017a6150}, {0x374be80, 0xc00059a070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000594a20?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1746
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

TestNoKubernetes/serial/StartWithK8s (303.65s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-138900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-138900 --driver=hyperv: exit status 1 (4m59.7740277s)

-- stdout --
	* [NoKubernetes-138900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-138900" primary control-plane node in "NoKubernetes-138900" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0624 06:12:03.880881    7140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-138900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-138900 -n NoKubernetes-138900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-138900 -n NoKubernetes-138900: exit status 7 (3.8746285s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0624 06:17:03.650316    9008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0624 06:17:07.373254    9008 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-138900".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-138900 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-138900:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-138900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.65s)
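Note on this failure: the "Nonexistent" host state comes from the hyperv driver's Hyper-V\Get-VM probe quoted in the stderr block, which found no VM for the profile after the start command exited with status 1 at roughly the 5-minute mark. A minimal PowerShell sketch of the same check, purely illustrative and assuming the profile name from this run, is:

	# Illustrative re-run of the probe the hyperv driver performed (profile name taken from this failure)
	$vm = Hyper-V\Get-VM -Name "NoKubernetes-138900" -ErrorAction SilentlyContinue
	if ($null -eq $vm) { Write-Output "VM not found - creation did not complete" } else { Write-Output "VM state: $($vm.State)" }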


Test pass (95/134)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.95
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.18
9 TestDownloadOnly/v1.20.0/DeleteAll 1.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.16
12 TestDownloadOnly/v1.30.2/json-events 11.04
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.19
18 TestDownloadOnly/v1.30.2/DeleteAll 1.19
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 1.23
21 TestBinaryMirror 6.69
22 TestOffline 412.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.18
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
27 TestAddons/Setup 431.88
30 TestAddons/parallel/Ingress 67.46
31 TestAddons/parallel/InspektorGadget 25.27
32 TestAddons/parallel/MetricsServer 20.29
33 TestAddons/parallel/HelmTiller 28.28
35 TestAddons/parallel/CSI 94.85
36 TestAddons/parallel/Headlamp 35.91
37 TestAddons/parallel/CloudSpanner 20.37
38 TestAddons/parallel/LocalPath 84.37
39 TestAddons/parallel/NvidiaDevicePlugin 21.29
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 157.4
44 TestAddons/serial/GCPAuth/Namespaces 0.34
45 TestAddons/StoppedEnableDisable 54.23
57 TestErrorSpam/start 16.41
58 TestErrorSpam/status 36.1
59 TestErrorSpam/pause 22.04
60 TestErrorSpam/unpause 22.29
61 TestErrorSpam/stop 60.24
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 205.81
66 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/KubeContext 0.13
72 TestFunctional/serial/CacheCmd/cache/add_remote 349.02
73 TestFunctional/serial/CacheCmd/cache/add_local 60.95
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.18
75 TestFunctional/serial/CacheCmd/cache/list 0.19
78 TestFunctional/serial/CacheCmd/cache/delete 0.34
85 TestFunctional/delete_addon-resizer_images 0.02
86 TestFunctional/delete_my-image_image 0.01
87 TestFunctional/delete_minikube_cached_images 0.01
91 TestMultiControlPlane/serial/StartCluster 719.51
92 TestMultiControlPlane/serial/DeployApp 13.69
94 TestMultiControlPlane/serial/AddWorkerNode 261.11
95 TestMultiControlPlane/serial/NodeLabels 0.19
96 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.35
97 TestMultiControlPlane/serial/CopyFile 637.46
101 TestImageBuild/serial/Setup 194.18
102 TestImageBuild/serial/NormalBuild 9.34
103 TestImageBuild/serial/BuildWithBuildArg 8.87
104 TestImageBuild/serial/BuildWithDockerIgnore 7.65
105 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.45
109 TestJSONOutput/start/Command 212.26
110 TestJSONOutput/start/Audit 0
112 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
113 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
115 TestJSONOutput/pause/Command 8.08
116 TestJSONOutput/pause/Audit 0
118 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
119 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
121 TestJSONOutput/unpause/Command 7.92
122 TestJSONOutput/unpause/Audit 0
124 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
125 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
127 TestJSONOutput/stop/Command 41.09
128 TestJSONOutput/stop/Audit 0
130 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
131 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
132 TestErrorJSONOutput 1.36
137 TestMainNoArgs 0.19
138 TestMinikubeProfile 521.97
141 TestMountStart/serial/StartWithMountFirst 159.37
142 TestMountStart/serial/VerifyMountFirst 9.73
143 TestMountStart/serial/StartWithMountSecond 158.64
144 TestMountStart/serial/VerifyMountSecond 9.55
145 TestMountStart/serial/DeleteFirst 31.47
146 TestMountStart/serial/VerifyMountPostDelete 9.58
147 TestMountStart/serial/Stop 30.92
151 TestMultiNode/serial/FreshStart2Nodes 427.02
152 TestMultiNode/serial/DeployApp2Nodes 8.76
154 TestMultiNode/serial/AddNode 225.25
155 TestMultiNode/serial/MultiNodeLabels 0.18
156 TestMultiNode/serial/ProfileList 9.99
157 TestMultiNode/serial/CopyFile 366.62
158 TestMultiNode/serial/StopNode 78.01
159 TestMultiNode/serial/StartAfterStop 186.81
164 TestPreload 525.03
165 TestScheduledStopWindows 422.07
175 TestNoKubernetes/serial/StartNoK8sWithVersion 0.29
TestDownloadOnly/v1.20.0/json-events (20.95s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-455700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-455700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (20.9478197s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.95s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.18s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-455700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-455700: exit status 85 (172.1485ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT |          |
	|         | -p download-only-455700        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:20:23
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:20:23.941989    3752 out.go:291] Setting OutFile to fd 624 ...
	I0624 03:20:23.943039    3752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:23.943039    3752 out.go:304] Setting ErrFile to fd 628...
	I0624 03:20:23.943039    3752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0624 03:20:23.957282    3752 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0624 03:20:23.968244    3752 out.go:298] Setting JSON to true
	I0624 03:20:23.972735    3752 start.go:129] hostinfo: {"hostname":"minikube1","uptime":14879,"bootTime":1719209544,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:20:23.972735    3752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:20:23.977875    3752 out.go:97] [download-only-455700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:20:23.979897    3752 notify.go:220] Checking for updates...
	W0624 03:20:23.979897    3752 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0624 03:20:23.982576    3752 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:20:23.986212    3752 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:20:23.988987    3752 out.go:169] MINIKUBE_LOCATION=19124
	I0624 03:20:23.991976    3752 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0624 03:20:23.999295    3752 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0624 03:20:23.999956    3752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:20:29.395728    3752 out.go:97] Using the hyperv driver based on user configuration
	I0624 03:20:29.395854    3752 start.go:297] selected driver: hyperv
	I0624 03:20:29.396018    3752 start.go:901] validating driver "hyperv" against <nil>
	I0624 03:20:29.396402    3752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:20:29.444740    3752 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0624 03:20:29.446012    3752 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:20:29.446012    3752 cni.go:84] Creating CNI manager for ""
	I0624 03:20:29.446012    3752 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0624 03:20:29.446805    3752 start.go:340] cluster config:
	{Name:download-only-455700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-455700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:20:29.447513    3752 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:20:29.450805    3752 out.go:97] Downloading VM boot image ...
	I0624 03:20:29.450805    3752 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1718923868-19112-amd64.iso
	I0624 03:20:32.852745    3752 out.go:97] Starting "download-only-455700" primary control-plane node in "download-only-455700" cluster
	I0624 03:20:32.852745    3752 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:20:32.910207    3752 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0624 03:20:32.910207    3752 cache.go:56] Caching tarball of preloaded images
	I0624 03:20:32.910805    3752 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:20:32.916138    3752 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0624 03:20:32.916138    3752 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:32.985432    3752 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0624 03:20:37.770045    3752 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:37.772149    3752 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:38.741849    3752 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0624 03:20:38.750270    3752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-455700\config.json ...
	I0624 03:20:38.750801    3752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-455700\config.json: {Name:mke27e19b86c710f3c1ea1729ba9b82993fae0ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:20:38.752151    3752 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0624 03:20:38.752938    3752 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-455700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-455700"

-- /stdout --
** stderr ** 
	W0624 03:20:44.889001    4612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.18s)

TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1396456s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-455700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-455700: (1.1529406s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.16s)

TestDownloadOnly/v1.30.2/json-events (11.04s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-067200 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-067200 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv: (11.0345959s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (11.04s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-067200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-067200: exit status 85 (178.9802ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT |                     |
	|         | -p download-only-455700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:20 PDT |
	| delete  | -p download-only-455700        | download-only-455700 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT | 24 Jun 24 03:20 PDT |
	| start   | -o=json --download-only        | download-only-067200 | minikube1\jenkins | v1.33.1 | 24 Jun 24 03:20 PDT |                     |
	|         | -p download-only-067200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/24 03:20:47
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0624 03:20:47.380161    7708 out.go:291] Setting OutFile to fd 700 ...
	I0624 03:20:47.380793    7708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:47.380793    7708 out.go:304] Setting ErrFile to fd 704...
	I0624 03:20:47.380793    7708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 03:20:47.409586    7708 out.go:298] Setting JSON to true
	I0624 03:20:47.410442    7708 start.go:129] hostinfo: {"hostname":"minikube1","uptime":14902,"bootTime":1719209544,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0624 03:20:47.410442    7708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0624 03:20:47.415812    7708 out.go:97] [download-only-067200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0624 03:20:47.419903    7708 notify.go:220] Checking for updates...
	I0624 03:20:47.422346    7708 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0624 03:20:47.425055    7708 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0624 03:20:47.428583    7708 out.go:169] MINIKUBE_LOCATION=19124
	I0624 03:20:47.433109    7708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0624 03:20:47.438501    7708 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0624 03:20:47.439438    7708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0624 03:20:52.749549    7708 out.go:97] Using the hyperv driver based on user configuration
	I0624 03:20:52.749549    7708 start.go:297] selected driver: hyperv
	I0624 03:20:52.749549    7708 start.go:901] validating driver "hyperv" against <nil>
	I0624 03:20:52.757831    7708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0624 03:20:52.808882    7708 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0624 03:20:52.810190    7708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0624 03:20:52.810190    7708 cni.go:84] Creating CNI manager for ""
	I0624 03:20:52.810190    7708 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0624 03:20:52.810473    7708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0624 03:20:52.810514    7708 start.go:340] cluster config:
	{Name:download-only-067200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-067200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0624 03:20:52.810514    7708 iso.go:125] acquiring lock: {Name:mk3387573e178fc4369f5d2033fe70cc02f35191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0624 03:20:52.814464    7708 out.go:97] Starting "download-only-067200" primary control-plane node in "download-only-067200" cluster
	I0624 03:20:52.814550    7708 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:20:52.858316    7708 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:20:52.860078    7708 cache.go:56] Caching tarball of preloaded images
	I0624 03:20:52.860259    7708 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:20:52.864213    7708 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0624 03:20:52.864347    7708 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:52.935444    7708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0624 03:20:56.144768    7708 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:56.153593    7708 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0624 03:20:57.019632    7708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0624 03:20:57.020420    7708 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-067200\config.json ...
	I0624 03:20:57.020420    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-067200\config.json: {Name:mk43ed6dc6e44e16fb10ffbc5bde76117259aab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0624 03:20:57.021162    7708 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0624 03:20:57.022432    7708 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.30.2/kubectl.exe
	
	
	* The control-plane node download-only-067200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-067200"

-- /stdout --
** stderr ** 
	W0624 03:20:58.410472    3736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.19s)

TestDownloadOnly/v1.30.2/DeleteAll (1.19s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1876561s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (1.19s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-067200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-067200: (1.2316274s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.23s)

TestBinaryMirror (6.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-877500 --alsologtostderr --binary-mirror http://127.0.0.1:61584 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-877500 --alsologtostderr --binary-mirror http://127.0.0.1:61584 --driver=hyperv: (5.8804465s)
helpers_test.go:175: Cleaning up "binary-mirror-877500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-877500
--- PASS: TestBinaryMirror (6.69s)

TestOffline (412.44s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-138900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-138900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m59.2984441s)
helpers_test.go:175: Cleaning up "offline-docker-138900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-138900
E0624 06:18:21.876249     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-138900: (53.138815s)
--- PASS: TestOffline (412.44s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-517800
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-517800: exit status 85 (179.1681ms)

-- stdout --
	* Profile "addons-517800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517800"

-- /stdout --
** stderr ** 
	W0624 03:21:09.878629    9320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-517800
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-517800: exit status 85 (173.3643ms)

-- stdout --
	* Profile "addons-517800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517800"
-- /stdout --
** stderr ** 
	W0624 03:21:09.867736    3528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

                                                
                                    
TestAddons/Setup (431.88s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-517800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-517800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m11.8707627s)
--- PASS: TestAddons/Setup (431.88s)

                                                
                                    
TestAddons/parallel/Ingress (67.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-517800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-517800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-517800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c1d81a31-648f-46f4-a706-215ac6f3bf5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c1d81a31-648f-46f4-a706-215ac6f3bf5a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.006809s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.9318587s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-517800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0624 03:30:35.009665   14208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-517800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 ip: (2.670975s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.31.209.187
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable ingress-dns --alsologtostderr -v=1: (16.1396542s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable ingress --alsologtostderr -v=1: (21.7728128s)
--- PASS: TestAddons/parallel/Ingress (67.46s)

                                                
                                    
TestAddons/parallel/InspektorGadget (25.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rgrdr" [196defc3-6078-4c38-864e-764b5e597da2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0120448s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-517800
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-517800: (20.2541228s)
--- PASS: TestAddons/parallel/InspektorGadget (25.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (20.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.482ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-q5g7m" [d1ddb2d6-165e-4fa0-b8d4-bd2d32160acd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0151019s
addons_test.go:417: (dbg) Run:  kubectl --context addons-517800 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable metrics-server --alsologtostderr -v=1: (15.0582272s)
--- PASS: TestAddons/parallel/MetricsServer (20.29s)

                                                
                                    
TestAddons/parallel/HelmTiller (28.28s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 6.4542ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-sfcx8" [85b80422-f4d9-4038-ac34-1a41eef86170] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.022913s
addons_test.go:475: (dbg) Run:  kubectl --context addons-517800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-517800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.1376001s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable helm-tiller --alsologtostderr -v=1: (16.0850614s)
--- PASS: TestAddons/parallel/HelmTiller (28.28s)

                                                
                                    
TestAddons/parallel/CSI (94.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 20.4787ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-517800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-517800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c3f3ae55-c110-421c-8be1-fa6bfbe0b7c9] Pending
helpers_test.go:344: "task-pv-pod" [c3f3ae55-c110-421c-8be1-fa6bfbe0b7c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c3f3ae55-c110-421c-8be1-fa6bfbe0b7c9] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.0148005s
addons_test.go:586: (dbg) Run:  kubectl --context addons-517800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-517800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-517800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-517800 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-517800 delete pod task-pv-pod: (1.4147534s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-517800 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-517800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-517800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [911aab9d-6f2f-46e1-a085-80b2bcd6795b] Pending
helpers_test.go:344: "task-pv-pod-restore" [911aab9d-6f2f-46e1-a085-80b2bcd6795b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [911aab9d-6f2f-46e1-a085-80b2bcd6795b] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0202231s
addons_test.go:628: (dbg) Run:  kubectl --context addons-517800 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-517800 delete pod task-pv-pod-restore: (1.7900687s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-517800 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-517800 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.1198504s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable volumesnapshots --alsologtostderr -v=1: (16.1072453s)
--- PASS: TestAddons/parallel/CSI (94.85s)

                                                
                                    
TestAddons/parallel/Headlamp (35.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-517800 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-517800 --alsologtostderr -v=1: (15.8872458s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-4xxnw" [dea35b66-add0-4268-8e9e-dfaad067b0aa] Pending
helpers_test.go:344: "headlamp-7fc69f7444-4xxnw" [dea35b66-add0-4268-8e9e-dfaad067b0aa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-4xxnw" [dea35b66-add0-4268-8e9e-dfaad067b0aa] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.0136507s
--- PASS: TestAddons/parallel/Headlamp (35.91s)

                                                
                                    
TestAddons/parallel/CloudSpanner (20.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-n4lcp" [c581856c-fc81-4672-af62-4f5765bfe1c4] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0113698s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-517800
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-517800: (15.3489442s)
--- PASS: TestAddons/parallel/CloudSpanner (20.37s)

                                                
                                    
TestAddons/parallel/LocalPath (84.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-517800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-517800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2d384061-a2cd-496a-82a8-25e4104fafdb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2d384061-a2cd-496a-82a8-25e4104fafdb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2d384061-a2cd-496a-82a8-25e4104fafdb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0183576s
addons_test.go:992: (dbg) Run:  kubectl --context addons-517800 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 ssh "cat /opt/local-path-provisioner/pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 ssh "cat /opt/local-path-provisioner/pvc-55759cb1-7c5b-4df3-a14f-3555c91d3fa5_default_test-pvc/file1": (9.6846593s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-517800 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-517800 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.2222093s)
--- PASS: TestAddons/parallel/LocalPath (84.37s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (21.29s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ltfsf" [2afb2f39-6132-4d8e-8b6f-344b68dcd8a1] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0080463s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-517800
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-517800: (15.2720131s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.29s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-pwqfl" [4e98a17b-e552-40e7-bf37-d1bab32f6d9a] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0204838s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/parallel/Volcano (157.4s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 26.6819ms
addons_test.go:889: volcano-scheduler stabilized in 26.9337ms
addons_test.go:897: volcano-admission stabilized in 26.9725ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-s7h8j" [d3619b8e-1be8-4760-8ab5-da4ec1274c1b] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.0090956s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-fxrtv" [f90ee03c-b0c5-430a-9b0d-caf807ea9bfb] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.0135914s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-bw9lf" [3f3c9f6c-e53f-457b-8eec-5a74980656f6] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0108466s
addons_test.go:924: (dbg) Run:  kubectl --context addons-517800 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-517800 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-517800 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [89f8ad5b-5768-4e77-93ed-12787503b17c] Pending
helpers_test.go:344: "test-job-nginx-0" [89f8ad5b-5768-4e77-93ed-12787503b17c] Pending: PodScheduled:Unschedulable (all nodes are unavailable: 1 node(s) resource fit failed.)
helpers_test.go:344: "test-job-nginx-0" [89f8ad5b-5768-4e77-93ed-12787503b17c] Pending: PodScheduled:Schedulable (Pod my-volcano/test-job-nginx-0 can possibly be assigned to addons-517800 once resource is released)
helpers_test.go:344: "test-job-nginx-0" [89f8ad5b-5768-4e77-93ed-12787503b17c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [89f8ad5b-5768-4e77-93ed-12787503b17c] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 1m54.0107055s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-517800 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-517800 addons disable volcano --alsologtostderr -v=1: (25.3250841s)
--- PASS: TestAddons/parallel/Volcano (157.40s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-517800 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-517800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

                                                
                                    
TestAddons/StoppedEnableDisable (54.23s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-517800
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-517800: (41.6162584s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-517800
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-517800: (5.0543525s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-517800
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-517800: (4.5935794s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-517800
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-517800: (2.9576658s)
--- PASS: TestAddons/StoppedEnableDisable (54.23s)

                                                
                                    
TestErrorSpam/start (16.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run: (5.3813044s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run: (5.5278031s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 start --dry-run: (5.4861937s)
--- PASS: TestErrorSpam/start (16.41s)

                                                
                                    
TestErrorSpam/status (36.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status: (12.5073494s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status: (11.8574635s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 status: (11.7111396s)
--- PASS: TestErrorSpam/status (36.10s)

                                                
                                    
TestErrorSpam/pause (22.04s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause: (7.5898791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause: (7.1364624s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 pause: (7.2876145s)
--- PASS: TestErrorSpam/pause (22.04s)

                                                
                                    
TestErrorSpam/unpause (22.29s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause: (7.3848712s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause: (7.3668456s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause
E0624 03:38:21.853672     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:21.880522     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:21.907581     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:21.934774     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:21.978385     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:22.064091     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:22.232861     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:22.554386     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:23.209217     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:24.496268     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:27.063350     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 unpause: (7.5129993s)
--- PASS: TestErrorSpam/unpause (22.29s)

                                                
                                    
TestErrorSpam/stop (60.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop
E0624 03:38:32.205882     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:38:42.453862     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:39:02.939379     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop: (39.2206027s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop: (10.8867486s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-998200 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-998200 stop: (10.1131794s)
--- PASS: TestErrorSpam/stop (60.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\944\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (205.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-094900 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0624 03:39:43.903463     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 03:41:05.833078     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-094900 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m25.7913007s)
--- PASS: TestFunctional/serial/StartWithProxy (205.81s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (349.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:3.1: (1m48.0098015s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:3.3
E0624 03:53:21.849497     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:3.3: (2m0.491385s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:latest
E0624 03:54:45.053632     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 cache add registry.k8s.io/pause:latest: (2m0.520977s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (349.02s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (60.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-094900 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3374639373\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-094900 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3374639373\001: (3.2479726s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache add minikube-local-cache-test:functional-094900
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-094900 cache add minikube-local-cache-test:functional-094900: (57.2995309s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-094900 cache delete minikube-local-cache-test:functional-094900
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-094900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.34s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.02s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-094900
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-094900: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:functional-094900" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-094900": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-094900
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-094900: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-094900": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-094900
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-094900: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-094900": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (719.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-340000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0624 04:23:21.854289     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 04:28:05.085872     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 04:28:21.849994     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-340000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m21.74421s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr: (37.7627516s)
--- PASS: TestMultiControlPlane/serial/StartCluster (719.51s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (13.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- rollout status deployment/busybox: (5.8827615s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- nslookup kubernetes.io: (1.6951246s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- nslookup kubernetes.io: (1.5106952s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-lsn8j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-mg7l6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-340000 -- exec busybox-fc5497c4f-rrqj8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.69s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (261.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-340000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-340000 -v=7 --alsologtostderr: (3m31.3229193s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr
E0624 04:38:21.853528     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 status -v=7 --alsologtostderr: (49.7843717s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (261.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-340000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (29.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.346727s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.35s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (637.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 status --output json -v=7 --alsologtostderr: (48.9687752s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000:/home/docker/cp-test.txt: (9.444237s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt": (9.4826424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000.txt: (9.5467682s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt": (9.473816s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000_ha-340000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000_ha-340000-m02.txt: (16.5315357s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt": (9.409694s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m02.txt": (9.5027301s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000_ha-340000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000_ha-340000-m03.txt: (16.6921002s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt": (9.4676521s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m03.txt": (9.3300903s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000_ha-340000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000_ha-340000-m04.txt: (16.3959126s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test.txt": (9.4734065s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000_ha-340000-m04.txt": (9.5433475s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m02:/home/docker/cp-test.txt: (9.5658103s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt": (9.4065087s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m02.txt: (9.4343444s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt": (9.6339361s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m02_ha-340000.txt
E0624 04:43:21.860088     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m02_ha-340000.txt: (16.4736664s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt": (9.5084023s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000.txt": (9.4542621s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000-m02_ha-340000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000-m02_ha-340000-m03.txt: (16.5998273s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt": (9.4750943s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000-m03.txt": (9.5358742s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000-m02_ha-340000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m02:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000-m02_ha-340000-m04.txt: (16.7805158s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt"
E0624 04:44:45.092511     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test.txt": (9.526325s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000-m02_ha-340000-m04.txt": (9.5350295s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m03:/home/docker/cp-test.txt: (9.7931939s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt": (9.8931631s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m03.txt: (9.7940944s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt": (9.7939855s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m03_ha-340000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m03_ha-340000.txt: (17.0971282s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt": (9.7453014s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000.txt": (9.7902861s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt: (17.1929591s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt": (9.7489564s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000-m02.txt": (9.9080201s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m03:/home/docker/cp-test.txt ha-340000-m04:/home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt: (17.160085s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test.txt": (9.8102401s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test_ha-340000-m03_ha-340000-m04.txt": (9.8427492s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp testdata\cp-test.txt ha-340000-m04:/home/docker/cp-test.txt: (9.9500776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt": (9.8904495s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile727483249\001\cp-test_ha-340000-m04.txt: (9.8003266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt": (9.9145899s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m04_ha-340000.txt
E0624 04:48:21.868117     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000:/home/docker/cp-test_ha-340000-m04_ha-340000.txt: (17.0718472s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt": (9.7333264s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000.txt": (9.8456142s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000-m02:/home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt: (17.0654552s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt": (9.8337683s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m02 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000-m02.txt": (9.7539044s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 cp ha-340000-m04:/home/docker/cp-test.txt ha-340000-m03:/home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt: (17.1509534s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m04 "sudo cat /home/docker/cp-test.txt": (9.7388519s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-340000 ssh -n ha-340000-m03 "sudo cat /home/docker/cp-test_ha-340000-m04_ha-340000-m03.txt": (9.7911522s)
--- PASS: TestMultiControlPlane/serial/CopyFile (637.46s)
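For reference, each cp/ssh pair above is a copy-then-verify round trip: copy a file onto a node, cat it back over ssh, and compare it with the local source. Below is a minimal Go sketch of that round trip (an illustration only, not the helpers_test.go implementation; it assumes the same binary path, profile and node names shown above and that testdata\cp-test.txt exists in the working directory):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-windows-amd64.exe" // binary used throughout this report

	// Local source file that the test copies around.
	want, err := os.ReadFile(`testdata\cp-test.txt`)
	if err != nil {
		panic(err)
	}

	// minikube cp: push the file onto the ha-340000-m02 node.
	if err := exec.Command(bin, "-p", "ha-340000", "cp",
		`testdata\cp-test.txt`, "ha-340000-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// minikube ssh: read the file back from that node.
	got, err := exec.Command(bin, "-p", "ha-340000", "ssh", "-n", "ha-340000-m02",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match the local source")
	}
}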

                                                
                                    
TestImageBuild/serial/Setup (194.18s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-715900 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-715900 --driver=hyperv: (3m14.1779734s)
--- PASS: TestImageBuild/serial/Setup (194.18s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.34s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-715900
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-715900: (9.34139s)
--- PASS: TestImageBuild/serial/NormalBuild (9.34s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-715900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-715900: (8.8730321s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.87s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.65s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-715900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-715900: (7.6420601s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.65s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.45s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-715900
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-715900: (7.4405273s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.45s)

                                                
                                    
TestJSONOutput/start/Command (212.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-047700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0624 05:01:25.103028     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-047700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m32.2636016s)
--- PASS: TestJSONOutput/start/Command (212.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-047700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-047700 --output=json --user=testUser: (8.0838052s)
--- PASS: TestJSONOutput/pause/Command (8.08s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.92s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-047700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-047700 --output=json --user=testUser: (7.9200564s)
--- PASS: TestJSONOutput/unpause/Command (7.92s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (41.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-047700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-047700 --output=json --user=testUser: (41.0942153s)
--- PASS: TestJSONOutput/stop/Command (41.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.36s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-102800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-102800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (217.0355ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c44e7123-a4c8-43d4-80c6-3d4afcf04709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c00d661-624b-4807-b0a7-96238c3c5cfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"5d851378-86a5-42bb-af00-9b56baed32a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8bf3f1eb-0e65-4375-951d-6f09f21b2beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"ae74a0c2-d943-4bef-8800-bd9dafb35d1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19124"}}
	{"specversion":"1.0","id":"ddd602fa-7d78-4b1b-bd99-61790de31b0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07b42f80-a1ca-4acc-9d89-1a12a5552901","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:03:32.854010    9324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-102800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-102800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-102800: (1.1362643s)
--- PASS: TestErrorJSONOutput (1.36s)
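The -- stdout -- block above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with a type of io.k8s.sigs.minikube.step, .info or .error and a string-keyed data payload. A minimal consumer sketch follows (an illustration only, not minikube code; it assumes the JSON stream is piped to stdin and relies only on the fields visible above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Shape of the lines shown above; every value under "data" is a string here.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe the --output=json stream into this program
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip blank lines and any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: exitcode=%s message=%q\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout above, this would report the single DRV_UNSUPPORTED_OS error event with exit code 56.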

                                                
                                    
TestMainNoArgs (0.19s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.19s)

                                                
                                    
TestMinikubeProfile (521.97s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-752600 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-752600 --driver=hyperv: (3m18.2739555s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-752600 --driver=hyperv
E0624 05:08:21.873275     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-752600 --driver=hyperv: (3m16.9984936s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-752600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.6998531s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-752600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.5878687s)
helpers_test.go:175: Cleaning up "second-752600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-752600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-752600: (46.524962s)
helpers_test.go:175: Cleaning up "first-752600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-752600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-752600: (40.182768s)
--- PASS: TestMinikubeProfile (521.97s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (159.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-607600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0624 05:13:21.872507     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-607600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.3679941s)
--- PASS: TestMountStart/serial/StartWithMountFirst (159.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.73s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-607600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-607600 ssh -- ls /minikube-host: (9.7250152s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.73s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (158.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-607600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-607600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.6285546s)
--- PASS: TestMountStart/serial/StartWithMountSecond (158.64s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.55s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-607600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-607600 ssh -- ls /minikube-host: (9.5535288s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.55s)

                                                
                                    
TestMountStart/serial/DeleteFirst (31.47s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-607600 --alsologtostderr -v=5
E0624 05:18:05.110032     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 05:18:21.864131     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-607600 --alsologtostderr -v=5: (31.4678164s)
--- PASS: TestMountStart/serial/DeleteFirst (31.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.58s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-607600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-607600 ssh -- ls /minikube-host: (9.5829335s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.58s)

                                                
                                    
TestMountStart/serial/Stop (30.92s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-607600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-607600: (30.9149299s)
--- PASS: TestMountStart/serial/Stop (30.92s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (427.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-876600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0624 05:23:21.870619     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
E0624 05:28:21.867398     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-876600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m43.5764127s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr: (23.4317421s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (427.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- rollout status deployment/busybox: (3.6383528s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- nslookup kubernetes.io: (1.6995516s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-ddhfw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-876600 -- exec busybox-fc5497c4f-vqhsz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.76s)
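The exec ... nslookup commands above confirm in-cluster DNS from each busybox replica. A standalone sketch of the same check (illustration only, not the multinode_test.go implementation; binary, profile and pod names are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const bin = "out/minikube-windows-amd64.exe"
	// Pod names as they appear in the log above.
	pods := []string{"busybox-fc5497c4f-ddhfw", "busybox-fc5497c4f-vqhsz"}
	for _, pod := range pods {
		// minikube kubectl -p multinode-876600 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local
		out, err := exec.Command(bin, "kubectl", "-p", "multinode-876600", "--",
			"exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		fmt.Printf("%s:\n%s(err: %v)\n", pod, out, err)
	}
}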

                                                
                                    
TestMultiNode/serial/AddNode (225.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-876600 -v 3 --alsologtostderr
E0624 05:33:21.878889     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-876600 -v 3 --alsologtostderr: (3m9.3875762s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr
E0624 05:34:45.129267     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr: (35.8575401s)
--- PASS: TestMultiNode/serial/AddNode (225.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-876600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (9.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.9864149s)
--- PASS: TestMultiNode/serial/ProfileList (9.99s)

                                                
                                    
TestMultiNode/serial/CopyFile (366.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 status --output json --alsologtostderr: (36.405583s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600:/home/docker/cp-test.txt: (9.5921979s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt": (9.6165515s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600.txt: (9.645983s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt": (9.5636385s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt multinode-876600-m02:/home/docker/cp-test_multinode-876600_multinode-876600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt multinode-876600-m02:/home/docker/cp-test_multinode-876600_multinode-876600-m02.txt: (16.8280017s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt": (9.6530817s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test_multinode-876600_multinode-876600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test_multinode-876600_multinode-876600-m02.txt": (9.6464622s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt multinode-876600-m03:/home/docker/cp-test_multinode-876600_multinode-876600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600:/home/docker/cp-test.txt multinode-876600-m03:/home/docker/cp-test_multinode-876600_multinode-876600-m03.txt: (16.9238208s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test.txt": (9.6643947s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test_multinode-876600_multinode-876600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test_multinode-876600_multinode-876600-m03.txt": (9.6233844s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600-m02:/home/docker/cp-test.txt: (9.5500759s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt": (9.5528468s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m02.txt
E0624 05:38:21.867107     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m02.txt: (9.5967584s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt": (9.6179756s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt multinode-876600:/home/docker/cp-test_multinode-876600-m02_multinode-876600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt multinode-876600:/home/docker/cp-test_multinode-876600-m02_multinode-876600.txt: (16.7717479s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt": (9.4918857s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test_multinode-876600-m02_multinode-876600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test_multinode-876600-m02_multinode-876600.txt": (9.5320622s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt multinode-876600-m03:/home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m02:/home/docker/cp-test.txt multinode-876600-m03:/home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt: (16.7263625s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test.txt": (9.4781823s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test_multinode-876600-m02_multinode-876600-m03.txt": (9.5968867s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp testdata\cp-test.txt multinode-876600-m03:/home/docker/cp-test.txt: (9.5273456s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt": (9.5503633s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1652032313\001\cp-test_multinode-876600-m03.txt: (9.5750678s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt": (9.5745348s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt multinode-876600:/home/docker/cp-test_multinode-876600-m03_multinode-876600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt multinode-876600:/home/docker/cp-test_multinode-876600-m03_multinode-876600.txt: (16.9805781s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt": (9.4079675s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test_multinode-876600-m03_multinode-876600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600 "sudo cat /home/docker/cp-test_multinode-876600-m03_multinode-876600.txt": (9.4575242s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt multinode-876600-m02:/home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 cp multinode-876600-m03:/home/docker/cp-test.txt multinode-876600-m02:/home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt: (16.5387226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m03 "sudo cat /home/docker/cp-test.txt": (9.4559863s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 ssh -n multinode-876600-m02 "sudo cat /home/docker/cp-test_multinode-876600-m03_multinode-876600-m02.txt": (9.4523686s)
--- PASS: TestMultiNode/serial/CopyFile (366.62s)

                                                
                                    
TestMultiNode/serial/StopNode (78.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 node stop m03: (25.1365142s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-876600 status: exit status 7 (26.5621702s)

                                                
                                                
-- stdout --
	multinode-876600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:41:58.208584   13488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-876600 status --alsologtostderr: exit status 7 (26.3066647s)

                                                
                                                
-- stdout --
	multinode-876600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 05:42:24.771559    8620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0624 05:42:24.780137    8620 out.go:291] Setting OutFile to fd 912 ...
	I0624 05:42:24.781276    8620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:42:24.781276    8620 out.go:304] Setting ErrFile to fd 876...
	I0624 05:42:24.781276    8620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0624 05:42:24.795640    8620 out.go:298] Setting JSON to false
	I0624 05:42:24.795640    8620 mustload.go:65] Loading cluster: multinode-876600
	I0624 05:42:24.796047    8620 notify.go:220] Checking for updates...
	I0624 05:42:24.796610    8620 config.go:182] Loaded profile config "multinode-876600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0624 05:42:24.796610    8620 status.go:255] checking status of multinode-876600 ...
	I0624 05:42:24.797570    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:42:27.026935    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:27.027121    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:27.027196    8620 status.go:330] multinode-876600 host status = "Running" (err=<nil>)
	I0624 05:42:27.027196    8620 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:42:27.027335    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:42:29.321074    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:29.321378    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:29.321378    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:42:31.874252    8620 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:42:31.874252    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:31.874477    8620 host.go:66] Checking if "multinode-876600" exists ...
	I0624 05:42:31.886398    8620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0624 05:42:31.886398    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600 ).state
	I0624 05:42:34.049640    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:34.049640    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:34.049791    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600 ).networkadapters[0]).ipaddresses[0]
	I0624 05:42:36.653885    8620 main.go:141] libmachine: [stdout =====>] : 172.31.211.219
	
	I0624 05:42:36.653885    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:36.653885    8620 sshutil.go:53] new ssh client: &{IP:172.31.211.219 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600\id_rsa Username:docker}
	I0624 05:42:36.760694    8620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8741988s)
	I0624 05:42:36.774278    8620 ssh_runner.go:195] Run: systemctl --version
	I0624 05:42:36.798040    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:42:36.826503    8620 kubeconfig.go:125] found "multinode-876600" server: "https://172.31.211.219:8443"
	I0624 05:42:36.826695    8620 api_server.go:166] Checking apiserver status ...
	I0624 05:42:36.840247    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0624 05:42:36.883786    8620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1946/cgroup
	W0624 05:42:36.903572    8620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1946/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0624 05:42:36.916512    8620 ssh_runner.go:195] Run: ls
	I0624 05:42:36.923619    8620 api_server.go:253] Checking apiserver healthz at https://172.31.211.219:8443/healthz ...
	I0624 05:42:36.932155    8620 api_server.go:279] https://172.31.211.219:8443/healthz returned 200:
	ok
	I0624 05:42:36.932155    8620 status.go:422] multinode-876600 apiserver status = Running (err=<nil>)
	I0624 05:42:36.932950    8620 status.go:257] multinode-876600 status: &{Name:multinode-876600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0624 05:42:36.932950    8620 status.go:255] checking status of multinode-876600-m02 ...
	I0624 05:42:36.933153    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:42:39.114253    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:39.114253    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:39.114926    8620 status.go:330] multinode-876600-m02 host status = "Running" (err=<nil>)
	I0624 05:42:39.114926    8620 host.go:66] Checking if "multinode-876600-m02" exists ...
	I0624 05:42:39.115486    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:42:41.270680    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:41.270680    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:41.271166    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:42:43.877134    8620 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:42:43.878102    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:43.878164    8620 host.go:66] Checking if "multinode-876600-m02" exists ...
	I0624 05:42:43.891137    8620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0624 05:42:43.891715    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m02 ).state
	I0624 05:42:46.029117    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0624 05:42:46.029117    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:46.029690    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-876600-m02 ).networkadapters[0]).ipaddresses[0]
	I0624 05:42:48.662927    8620 main.go:141] libmachine: [stdout =====>] : 172.31.221.199
	
	I0624 05:42:48.662927    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:48.663321    8620 sshutil.go:53] new ssh client: &{IP:172.31.221.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-876600-m02\id_rsa Username:docker}
	I0624 05:42:48.757570    8620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8664138s)
	I0624 05:42:48.770636    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0624 05:42:48.793967    8620 status.go:257] multinode-876600-m02 status: &{Name:multinode-876600-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0624 05:42:48.793967    8620 status.go:255] checking status of multinode-876600-m03 ...
	I0624 05:42:48.794670    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-876600-m03 ).state
	I0624 05:42:50.942942    8620 main.go:141] libmachine: [stdout =====>] : Off
	
	I0624 05:42:50.943643    8620 main.go:141] libmachine: [stderr =====>] : 
	I0624 05:42:50.943643    8620 status.go:330] multinode-876600-m03 host status = "Stopped" (err=<nil>)
	I0624 05:42:50.943643    8620 status.go:343] host is not running, skipping remaining checks
	I0624 05:42:50.943643    8620 status.go:257] multinode-876600-m03 status: &{Name:multinode-876600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (78.01s)
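
Note on the stderr warning repeated throughout these runs ("Unable to resolve the current Docker CLI context \"default\" ... meta.json: The system cannot find the path specified"): the Docker CLI looks up per-context metadata under .docker\contexts\meta\<sha256 of the context name>\meta.json, and the long directory name in the message is simply the SHA-256 digest of the string "default". A minimal, standalone sketch (not part of the test suite, standard library only) that reproduces that digest:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// The Docker CLI names a context's metadata directory after the
		// SHA-256 digest of the context name. For "default" this prints
		// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// the directory that appears in the warning above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}

The warning appears to be benign for these Hyper-V runs, since the affected tests still pass; it only indicates that no Docker context metadata has ever been written on this worker.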

                                                
                                    
TestMultiNode/serial/StartAfterStop (186.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 node start m03 -v=7 --alsologtostderr
E0624 05:43:21.870373     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 node start m03 -v=7 --alsologtostderr: (2m31.322319s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-876600 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-876600 status -v=7 --alsologtostderr: (35.3074091s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (186.81s)

                                                
                                    
TestPreload (525.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-790500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0624 05:58:21.882946     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-790500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m31.8870271s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-790500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-790500 image pull gcr.io/k8s-minikube/busybox: (8.2970463s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-790500
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-790500: (39.5832529s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-790500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0624 06:03:21.878225     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-790500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m36.1912719s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-790500 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-790500 image list: (7.2637924s)
helpers_test.go:175: Cleaning up "test-preload-790500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-790500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-790500: (41.7658653s)
--- PASS: TestPreload (525.03s)

                                                
                                    
TestScheduledStopWindows (422.07s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-651600 --memory=2048 --driver=hyperv
E0624 06:08:05.152125     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-651600 --memory=2048 --driver=hyperv: (3m16.8224003s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-651600 --schedule 5m
E0624 06:08:21.890816     944 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-517800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-651600 --schedule 5m: (10.708417s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-651600 -n scheduled-stop-651600
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-651600 -n scheduled-stop-651600: exit status 1 (10.0137959s)

                                                
                                                
** stderr ** 
	W0624 06:08:29.031467   13564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-651600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-651600 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.3920878s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-651600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-651600 --schedule 5s: (10.5241169s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-651600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-651600: exit status 7 (2.2724472s)

                                                
                                                
-- stdout --
	scheduled-stop-651600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 06:09:58.977370    5936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-651600 -n scheduled-stop-651600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-651600 -n scheduled-stop-651600: exit status 7 (2.2670892s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 06:10:01.257056    7080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-651600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-651600
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p scheduled-stop-651600: exit status 1 (2m0.0121498s)

                                                
                                                
** stderr ** 
	W0624 06:10:03.530577    3904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:180: failed cleanup: exit status 1
--- PASS: TestScheduledStopWindows (422.07s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-138900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-138900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (294.7839ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-138900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0624 06:12:03.578645    8244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

                                                
                                    

Test skip (20/134)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0.01s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.01s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    